00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3688 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3289 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.025 The recommended git tool is: git 00:00:00.025 using credential 00000000-0000-0000-0000-000000000002 00:00:00.027 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.041 Fetching changes from the remote Git repository 00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.065 Using shallow fetch with depth 1 00:00:00.065 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.065 > git --version # timeout=10 00:00:00.096 > git --version # 'git version 2.39.2' 00:00:00.096 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.138 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.138 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.188 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.198 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.210 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD) 00:00:05.210 > git config core.sparsecheckout # timeout=10 00:00:05.220 > git read-tree -mu HEAD # timeout=10 00:00:05.234 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5 00:00:05.251 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems" 00:00:05.251 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:05.364 [Pipeline] Start of Pipeline 00:00:05.375 [Pipeline] library 00:00:05.377 Loading library shm_lib@master 00:00:05.377 Library shm_lib@master is cached. Copying from home. 00:00:05.390 [Pipeline] node 00:00:05.404 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.405 [Pipeline] { 00:00:05.413 [Pipeline] catchError 00:00:05.415 [Pipeline] { 00:00:05.424 [Pipeline] wrap 00:00:05.431 [Pipeline] { 00:00:05.436 [Pipeline] stage 00:00:05.438 [Pipeline] { (Prologue) 00:00:05.452 [Pipeline] echo 00:00:05.453 Node: VM-host-SM16 00:00:05.457 [Pipeline] cleanWs 00:00:05.464 [WS-CLEANUP] Deleting project workspace... 00:00:05.464 [WS-CLEANUP] Deferred wipeout is used... 
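The checkout above reduces to a shallow fetch of refs/heads/master from the Gerrit-hosted build-pool repository followed by a forced checkout of the fetched commit. A minimal local equivalent, assuming direct anonymous HTTPS access (the proxy-dmz.intel.com proxy and the GIT_ASKPASS credential handling in the log are Jenkins-managed, and the jbp directory name here is only illustrative), would be roughly:

# sketch only: mirrors the git commands traced in the log above
git init jbp && cd jbp
git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
# detached checkout of the commit reported as FETCH_HEAD in the log
git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422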
00:00:05.470 [WS-CLEANUP] done 00:00:05.717 [Pipeline] setCustomBuildProperty 00:00:05.781 [Pipeline] httpRequest 00:00:05.808 [Pipeline] echo 00:00:05.809 Sorcerer 10.211.164.101 is alive 00:00:05.816 [Pipeline] httpRequest 00:00:05.820 HttpMethod: GET 00:00:05.821 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:05.821 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:05.832 Response Code: HTTP/1.1 200 OK 00:00:05.833 Success: Status code 200 is in the accepted range: 200,404 00:00:05.833 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:13.662 [Pipeline] sh 00:00:13.944 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz 00:00:13.961 [Pipeline] httpRequest 00:00:13.990 [Pipeline] echo 00:00:13.992 Sorcerer 10.211.164.101 is alive 00:00:14.003 [Pipeline] httpRequest 00:00:14.007 HttpMethod: GET 00:00:14.008 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:14.009 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:14.029 Response Code: HTTP/1.1 200 OK 00:00:14.029 Success: Status code 200 is in the accepted range: 200,404 00:00:14.030 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:12.468 [Pipeline] sh 00:01:12.744 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:15.285 [Pipeline] sh 00:01:15.567 + git -C spdk log --oneline -n5 00:01:15.567 f7b31b2b9 log: declare g_deprecation_epoch static 00:01:15.567 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static 00:01:15.567 3731556bd lvol: declare g_lvol_if static 00:01:15.567 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static 00:01:15.567 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static 00:01:15.586 [Pipeline] withCredentials 00:01:15.597 > git --version # timeout=10 00:01:15.609 > git --version # 'git version 2.39.2' 00:01:15.624 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:15.626 [Pipeline] { 00:01:15.636 [Pipeline] retry 00:01:15.638 [Pipeline] { 00:01:15.656 [Pipeline] sh 00:01:15.934 + git ls-remote http://dpdk.org/git/dpdk main 00:01:15.945 [Pipeline] } 00:01:15.967 [Pipeline] // retry 00:01:15.973 [Pipeline] } 00:01:15.994 [Pipeline] // withCredentials 00:01:16.004 [Pipeline] httpRequest 00:01:16.027 [Pipeline] echo 00:01:16.029 Sorcerer 10.211.164.101 is alive 00:01:16.038 [Pipeline] httpRequest 00:01:16.043 HttpMethod: GET 00:01:16.043 URL: http://10.211.164.101/packages/dpdk_90ec9b0db5c7bf7f911cb5ebcd8dfd15eb69c7dd.tar.gz 00:01:16.044 Sending request to url: http://10.211.164.101/packages/dpdk_90ec9b0db5c7bf7f911cb5ebcd8dfd15eb69c7dd.tar.gz 00:01:16.045 Response Code: HTTP/1.1 200 OK 00:01:16.045 Success: Status code 200 is in the accepted range: 200,404 00:01:16.046 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_90ec9b0db5c7bf7f911cb5ebcd8dfd15eb69c7dd.tar.gz 00:01:22.212 [Pipeline] sh 00:01:22.487 + tar --no-same-owner -xf dpdk_90ec9b0db5c7bf7f911cb5ebcd8dfd15eb69c7dd.tar.gz 00:01:23.875 [Pipeline] sh 00:01:24.153 + git -C dpdk log --oneline -n5 00:01:24.154 90ec9b0db5 net/mlx5: replenish MPRQ buffers for miniCQEs 00:01:24.154 3f11694354 net/mlx5: fix RSS 
and queue action validation 00:01:24.154 e6dfb25012 net/mlx5: fix action configuration validation 00:01:24.154 cf9a91c67b net/mlx5: fix disabling E-Switch default flow rules 00:01:24.154 463e5abe09 common/mlx5: remove unneeded field when modify RQ table 00:01:24.173 [Pipeline] writeFile 00:01:24.190 [Pipeline] sh 00:01:24.471 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:24.482 [Pipeline] sh 00:01:24.760 + cat autorun-spdk.conf 00:01:24.760 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.760 SPDK_TEST_NVMF=1 00:01:24.760 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.760 SPDK_TEST_URING=1 00:01:24.760 SPDK_TEST_USDT=1 00:01:24.760 SPDK_RUN_UBSAN=1 00:01:24.760 NET_TYPE=virt 00:01:24.760 SPDK_TEST_NATIVE_DPDK=main 00:01:24.760 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:24.760 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.769 RUN_NIGHTLY=1 00:01:24.771 [Pipeline] } 00:01:24.788 [Pipeline] // stage 00:01:24.803 [Pipeline] stage 00:01:24.806 [Pipeline] { (Run VM) 00:01:24.821 [Pipeline] sh 00:01:25.100 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:25.100 + echo 'Start stage prepare_nvme.sh' 00:01:25.100 Start stage prepare_nvme.sh 00:01:25.100 + [[ -n 7 ]] 00:01:25.100 + disk_prefix=ex7 00:01:25.100 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:25.100 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:25.100 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:25.100 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.100 ++ SPDK_TEST_NVMF=1 00:01:25.100 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.100 ++ SPDK_TEST_URING=1 00:01:25.100 ++ SPDK_TEST_USDT=1 00:01:25.100 ++ SPDK_RUN_UBSAN=1 00:01:25.100 ++ NET_TYPE=virt 00:01:25.100 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:25.100 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:25.100 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.100 ++ RUN_NIGHTLY=1 00:01:25.101 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:25.101 + nvme_files=() 00:01:25.101 + declare -A nvme_files 00:01:25.101 + backend_dir=/var/lib/libvirt/images/backends 00:01:25.101 + nvme_files['nvme.img']=5G 00:01:25.101 + nvme_files['nvme-cmb.img']=5G 00:01:25.101 + nvme_files['nvme-multi0.img']=4G 00:01:25.101 + nvme_files['nvme-multi1.img']=4G 00:01:25.101 + nvme_files['nvme-multi2.img']=4G 00:01:25.101 + nvme_files['nvme-openstack.img']=8G 00:01:25.101 + nvme_files['nvme-zns.img']=5G 00:01:25.101 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:25.101 + (( SPDK_TEST_FTL == 1 )) 00:01:25.101 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:25.101 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:25.101 + for nvme in "${!nvme_files[@]}" 00:01:25.101 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:25.101 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.101 + for nvme in "${!nvme_files[@]}" 00:01:25.101 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:25.668 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:25.668 + for nvme in "${!nvme_files[@]}" 00:01:25.668 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:25.668 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:25.668 + for nvme in "${!nvme_files[@]}" 00:01:25.668 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:25.668 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:25.668 + for nvme in "${!nvme_files[@]}" 00:01:25.668 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:25.925 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.925 + for nvme in "${!nvme_files[@]}" 00:01:25.925 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:25.925 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.925 + for nvme in "${!nvme_files[@]}" 00:01:25.925 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:26.491 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.491 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:26.491 + echo 'End stage prepare_nvme.sh' 00:01:26.491 End stage prepare_nvme.sh 00:01:26.502 [Pipeline] sh 00:01:26.781 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:26.781 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:01:26.781 00:01:26.781 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:26.781 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:26.781 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:26.781 HELP=0 00:01:26.781 DRY_RUN=0 00:01:26.781 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:26.781 NVME_DISKS_TYPE=nvme,nvme, 00:01:26.781 NVME_AUTO_CREATE=0 00:01:26.781 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:26.781 NVME_CMB=,, 00:01:26.781 NVME_PMR=,, 00:01:26.781 NVME_ZNS=,, 00:01:26.781 NVME_MS=,, 00:01:26.781 NVME_FDP=,, 
00:01:26.781 SPDK_VAGRANT_DISTRO=fedora38 00:01:26.781 SPDK_VAGRANT_VMCPU=10 00:01:26.781 SPDK_VAGRANT_VMRAM=12288 00:01:26.781 SPDK_VAGRANT_PROVIDER=libvirt 00:01:26.781 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:26.781 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:26.781 SPDK_OPENSTACK_NETWORK=0 00:01:26.781 VAGRANT_PACKAGE_BOX=0 00:01:26.781 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:26.781 FORCE_DISTRO=true 00:01:26.781 VAGRANT_BOX_VERSION= 00:01:26.781 EXTRA_VAGRANTFILES= 00:01:26.781 NIC_MODEL=e1000 00:01:26.781 00:01:26.781 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:26.781 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:29.312 Bringing machine 'default' up with 'libvirt' provider... 00:01:29.881 ==> default: Creating image (snapshot of base box volume). 00:01:30.152 ==> default: Creating domain with the following settings... 00:01:30.152 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721707042_a9904a740fe62b839457 00:01:30.152 ==> default: -- Domain type: kvm 00:01:30.152 ==> default: -- Cpus: 10 00:01:30.152 ==> default: -- Feature: acpi 00:01:30.152 ==> default: -- Feature: apic 00:01:30.152 ==> default: -- Feature: pae 00:01:30.152 ==> default: -- Memory: 12288M 00:01:30.152 ==> default: -- Memory Backing: hugepages: 00:01:30.152 ==> default: -- Management MAC: 00:01:30.152 ==> default: -- Loader: 00:01:30.152 ==> default: -- Nvram: 00:01:30.152 ==> default: -- Base box: spdk/fedora38 00:01:30.152 ==> default: -- Storage pool: default 00:01:30.152 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721707042_a9904a740fe62b839457.img (20G) 00:01:30.152 ==> default: -- Volume Cache: default 00:01:30.152 ==> default: -- Kernel: 00:01:30.152 ==> default: -- Initrd: 00:01:30.152 ==> default: -- Graphics Type: vnc 00:01:30.152 ==> default: -- Graphics Port: -1 00:01:30.152 ==> default: -- Graphics IP: 127.0.0.1 00:01:30.152 ==> default: -- Graphics Password: Not defined 00:01:30.152 ==> default: -- Video Type: cirrus 00:01:30.152 ==> default: -- Video VRAM: 9216 00:01:30.152 ==> default: -- Sound Type: 00:01:30.152 ==> default: -- Keymap: en-us 00:01:30.152 ==> default: -- TPM Path: 00:01:30.152 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:30.152 ==> default: -- Command line args: 00:01:30.152 ==> default: -> value=-device, 00:01:30.152 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:30.152 ==> default: -> value=-drive, 00:01:30.152 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:30.152 ==> default: -> value=-device, 00:01:30.153 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.153 ==> default: -> value=-device, 00:01:30.153 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:30.153 ==> default: -> value=-drive, 00:01:30.153 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:30.153 ==> default: -> value=-device, 00:01:30.153 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.153 ==> default: -> value=-drive, 
00:01:30.153 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:30.153 ==> default: -> value=-device, 00:01:30.153 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.153 ==> default: -> value=-drive, 00:01:30.153 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:30.153 ==> default: -> value=-device, 00:01:30.153 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.153 ==> default: Creating shared folders metadata... 00:01:30.153 ==> default: Starting domain. 00:01:32.067 ==> default: Waiting for domain to get an IP address... 00:01:50.148 ==> default: Waiting for SSH to become available... 00:01:50.148 ==> default: Configuring and enabling network interfaces... 00:01:53.432 default: SSH address: 192.168.121.89:22 00:01:53.432 default: SSH username: vagrant 00:01:53.432 default: SSH auth method: private key 00:01:55.334 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:01.907 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:08.469 ==> default: Mounting SSHFS shared folder... 00:02:09.846 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:09.846 ==> default: Checking Mount.. 00:02:10.788 ==> default: Folder Successfully Mounted! 00:02:10.788 ==> default: Running provisioner: file... 00:02:11.724 default: ~/.gitconfig => .gitconfig 00:02:11.983 00:02:11.983 SUCCESS! 00:02:11.983 00:02:11.983 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:11.983 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:11.983 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:11.983 00:02:11.992 [Pipeline] } 00:02:12.009 [Pipeline] // stage 00:02:12.019 [Pipeline] dir 00:02:12.019 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:02:12.021 [Pipeline] { 00:02:12.035 [Pipeline] catchError 00:02:12.037 [Pipeline] { 00:02:12.051 [Pipeline] sh 00:02:12.329 + vagrant ssh-config --host vagrant 00:02:12.329 + sed -ne /^Host/,$p 00:02:12.329 + tee ssh_conf 00:02:15.614 Host vagrant 00:02:15.614 HostName 192.168.121.89 00:02:15.614 User vagrant 00:02:15.614 Port 22 00:02:15.614 UserKnownHostsFile /dev/null 00:02:15.614 StrictHostKeyChecking no 00:02:15.614 PasswordAuthentication no 00:02:15.614 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:15.614 IdentitiesOnly yes 00:02:15.614 LogLevel FATAL 00:02:15.614 ForwardAgent yes 00:02:15.614 ForwardX11 yes 00:02:15.614 00:02:15.628 [Pipeline] withEnv 00:02:15.630 [Pipeline] { 00:02:15.646 [Pipeline] sh 00:02:15.925 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:15.925 source /etc/os-release 00:02:15.925 [[ -e /image.version ]] && img=$(< /image.version) 00:02:15.925 # Minimal, systemd-like check. 
00:02:15.925 if [[ -e /.dockerenv ]]; then 00:02:15.925 # Clear garbage from the node's name: 00:02:15.925 # agt-er_autotest_547-896 -> autotest_547-896 00:02:15.925 # $HOSTNAME is the actual container id 00:02:15.925 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:15.925 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:15.925 # We can assume this is a mount from a host where container is running, 00:02:15.925 # so fetch its hostname to easily identify the target swarm worker. 00:02:15.925 container="$(< /etc/hostname) ($agent)" 00:02:15.925 else 00:02:15.925 # Fallback 00:02:15.925 container=$agent 00:02:15.925 fi 00:02:15.925 fi 00:02:15.925 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:15.925 00:02:16.194 [Pipeline] } 00:02:16.213 [Pipeline] // withEnv 00:02:16.223 [Pipeline] setCustomBuildProperty 00:02:16.239 [Pipeline] stage 00:02:16.242 [Pipeline] { (Tests) 00:02:16.260 [Pipeline] sh 00:02:16.539 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:16.553 [Pipeline] sh 00:02:16.831 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:16.847 [Pipeline] timeout 00:02:16.848 Timeout set to expire in 30 min 00:02:16.850 [Pipeline] { 00:02:16.866 [Pipeline] sh 00:02:17.144 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:17.710 HEAD is now at f7b31b2b9 log: declare g_deprecation_epoch static 00:02:17.723 [Pipeline] sh 00:02:18.001 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:18.272 [Pipeline] sh 00:02:18.550 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:18.566 [Pipeline] sh 00:02:18.844 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:18.844 ++ readlink -f spdk_repo 00:02:18.844 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:18.844 + [[ -n /home/vagrant/spdk_repo ]] 00:02:18.844 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:18.844 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:18.844 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:18.844 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:18.844 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:18.844 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:18.844 + cd /home/vagrant/spdk_repo 00:02:18.844 + source /etc/os-release 00:02:18.844 ++ NAME='Fedora Linux' 00:02:18.844 ++ VERSION='38 (Cloud Edition)' 00:02:18.844 ++ ID=fedora 00:02:18.844 ++ VERSION_ID=38 00:02:18.844 ++ VERSION_CODENAME= 00:02:18.844 ++ PLATFORM_ID=platform:f38 00:02:18.844 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:18.844 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:18.844 ++ LOGO=fedora-logo-icon 00:02:18.844 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:18.844 ++ HOME_URL=https://fedoraproject.org/ 00:02:18.844 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:18.844 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:18.844 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:18.844 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:18.844 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:18.844 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:18.844 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:18.844 ++ SUPPORT_END=2024-05-14 00:02:18.844 ++ VARIANT='Cloud Edition' 00:02:18.844 ++ VARIANT_ID=cloud 00:02:18.844 + uname -a 00:02:18.844 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:18.844 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:19.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:19.411 Hugepages 00:02:19.411 node hugesize free / total 00:02:19.411 node0 1048576kB 0 / 0 00:02:19.411 node0 2048kB 0 / 0 00:02:19.411 00:02:19.411 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:19.411 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:19.411 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:19.411 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:19.411 + rm -f /tmp/spdk-ld-path 00:02:19.411 + source autorun-spdk.conf 00:02:19.411 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:19.411 ++ SPDK_TEST_NVMF=1 00:02:19.411 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:19.411 ++ SPDK_TEST_URING=1 00:02:19.411 ++ SPDK_TEST_USDT=1 00:02:19.411 ++ SPDK_RUN_UBSAN=1 00:02:19.411 ++ NET_TYPE=virt 00:02:19.411 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:19.411 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:19.411 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:19.411 ++ RUN_NIGHTLY=1 00:02:19.411 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:19.411 + [[ -n '' ]] 00:02:19.411 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:19.669 + for M in /var/spdk/build-*-manifest.txt 00:02:19.669 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:19.669 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:19.669 + for M in /var/spdk/build-*-manifest.txt 00:02:19.669 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:19.669 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:19.669 ++ uname 00:02:19.669 + [[ Linux == \L\i\n\u\x ]] 00:02:19.669 + sudo dmesg -T 00:02:19.669 + sudo dmesg --clear 00:02:19.669 + dmesg_pid=6005 00:02:19.669 + sudo dmesg -Tw 00:02:19.669 + [[ Fedora Linux == FreeBSD ]] 00:02:19.669 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:19.669 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:19.669 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:19.669 + [[ -x /usr/src/fio-static/fio ]] 00:02:19.669 + export FIO_BIN=/usr/src/fio-static/fio 00:02:19.669 + FIO_BIN=/usr/src/fio-static/fio 00:02:19.669 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:19.669 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:19.669 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:19.669 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:19.669 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:19.669 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:19.669 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:19.669 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:19.669 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:19.669 Test configuration: 00:02:19.669 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:19.669 SPDK_TEST_NVMF=1 00:02:19.669 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:19.669 SPDK_TEST_URING=1 00:02:19.669 SPDK_TEST_USDT=1 00:02:19.669 SPDK_RUN_UBSAN=1 00:02:19.669 NET_TYPE=virt 00:02:19.669 SPDK_TEST_NATIVE_DPDK=main 00:02:19.669 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:19.669 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:19.669 RUN_NIGHTLY=1 03:58:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:19.669 03:58:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:19.669 03:58:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:19.669 03:58:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:19.669 03:58:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.669 03:58:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.669 03:58:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.669 03:58:12 -- paths/export.sh@5 -- $ export PATH 00:02:19.669 03:58:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.669 03:58:12 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:19.669 03:58:12 -- common/autobuild_common.sh@447 
-- $ date +%s 00:02:19.669 03:58:12 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721707092.XXXXXX 00:02:19.669 03:58:12 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721707092.ygbync 00:02:19.669 03:58:12 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:19.669 03:58:12 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:02:19.669 03:58:12 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:19.669 03:58:12 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:19.669 03:58:12 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:19.669 03:58:12 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:19.669 03:58:12 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:19.669 03:58:12 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:19.669 03:58:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.669 03:58:12 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:19.669 03:58:12 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:19.669 03:58:12 -- pm/common@17 -- $ local monitor 00:02:19.669 03:58:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.669 03:58:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.669 03:58:12 -- pm/common@25 -- $ sleep 1 00:02:19.669 03:58:12 -- pm/common@21 -- $ date +%s 00:02:19.669 03:58:12 -- pm/common@21 -- $ date +%s 00:02:19.669 03:58:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721707092 00:02:19.669 03:58:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721707092 00:02:19.669 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721707092_collect-vmstat.pm.log 00:02:19.669 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721707092_collect-cpu-load.pm.log 00:02:21.043 03:58:13 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:21.043 03:58:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:21.043 03:58:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:21.043 03:58:13 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:21.043 03:58:13 -- spdk/autobuild.sh@16 -- $ date -u 00:02:21.043 Tue Jul 23 03:58:13 AM UTC 2024 00:02:21.043 03:58:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:21.043 v24.09-pre-297-gf7b31b2b9 00:02:21.043 03:58:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:21.043 03:58:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:21.043 03:58:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:21.043 03:58:13 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:21.043 03:58:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:21.043 03:58:13 -- 
common/autotest_common.sh@10 -- $ set +x 00:02:21.043 ************************************ 00:02:21.043 START TEST ubsan 00:02:21.043 ************************************ 00:02:21.043 using ubsan 00:02:21.043 03:58:14 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:21.043 00:02:21.043 real 0m0.000s 00:02:21.043 user 0m0.000s 00:02:21.043 sys 0m0.000s 00:02:21.043 03:58:14 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:21.043 ************************************ 00:02:21.043 03:58:14 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:21.043 END TEST ubsan 00:02:21.043 ************************************ 00:02:21.043 03:58:14 -- common/autotest_common.sh@1142 -- $ return 0 00:02:21.043 03:58:14 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:21.043 03:58:14 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:21.043 03:58:14 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:21.043 03:58:14 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:21.043 03:58:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:21.043 03:58:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.043 ************************************ 00:02:21.043 START TEST build_native_dpdk 00:02:21.043 ************************************ 00:02:21.043 03:58:14 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:21.043 03:58:14 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:21.043 90ec9b0db5 net/mlx5: replenish MPRQ buffers for miniCQEs 00:02:21.043 3f11694354 net/mlx5: fix RSS and queue action validation 00:02:21.043 e6dfb25012 net/mlx5: fix action configuration validation 00:02:21.044 cf9a91c67b net/mlx5: fix disabling E-Switch default flow rules 00:02:21.044 463e5abe09 common/mlx5: remove unneeded field when modify RQ table 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 21.11.0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:21.044 03:58:14 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:21.044 patching file config/rte_config.h 00:02:21.044 Hunk #1 succeeded at 70 (offset 11 lines). 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc2 24.07.0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 24.07.0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc2 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc2 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc2 =~ ^[0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc2 =~ ^0x ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc2 =~ ^[a-f0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:02:21.044 03:58:14 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:21.044 03:58:14 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:21.045 03:58:14 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:26.306 The Meson build system 00:02:26.306 Version: 1.3.1 00:02:26.306 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:26.306 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:26.306 Build type: native build 00:02:26.306 Program cat found: YES (/usr/bin/cat) 00:02:26.306 Project name: DPDK 00:02:26.306 Project version: 24.07.0-rc2 00:02:26.306 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:26.306 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:26.306 Host machine cpu family: x86_64 00:02:26.306 Host machine cpu: x86_64 00:02:26.306 Message: ## Building in Developer Mode ## 00:02:26.306 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:26.306 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:26.306 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:26.306 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:26.306 Program cat found: YES (/usr/bin/cat) 00:02:26.306 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:26.306 Compiler for C supports arguments -march=native: YES 00:02:26.306 Checking for size of "void *" : 8 00:02:26.306 Checking for size of "void *" : 8 (cached) 00:02:26.306 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:26.306 Library m found: YES 00:02:26.306 Library numa found: YES 00:02:26.306 Has header "numaif.h" : YES 00:02:26.306 Library fdt found: NO 00:02:26.306 Library execinfo found: NO 00:02:26.306 Has header "execinfo.h" : YES 00:02:26.306 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:26.306 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:26.306 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:26.306 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:26.306 Run-time dependency openssl found: YES 3.0.9 00:02:26.306 Run-time dependency libpcap found: YES 1.10.4 00:02:26.306 Has header "pcap.h" with dependency libpcap: YES 00:02:26.306 Compiler for C supports arguments -Wcast-qual: YES 00:02:26.306 Compiler for C supports arguments -Wdeprecated: YES 00:02:26.306 Compiler for C supports arguments -Wformat: YES 00:02:26.306 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:26.306 Compiler for C supports arguments -Wformat-security: NO 00:02:26.306 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.307 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:26.307 Compiler for C supports arguments -Wnested-externs: YES 00:02:26.307 Compiler for C supports arguments -Wold-style-definition: YES 00:02:26.307 Compiler for C supports arguments -Wpointer-arith: YES 00:02:26.307 Compiler for C supports arguments -Wsign-compare: YES 00:02:26.307 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:26.307 Compiler for C supports arguments -Wundef: YES 00:02:26.307 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.307 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:26.307 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:26.307 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:26.307 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:26.307 Program objdump found: YES (/usr/bin/objdump) 00:02:26.307 Compiler for C supports arguments -mavx512f: YES 00:02:26.307 Checking if "AVX512 checking" compiles: YES 00:02:26.307 Fetching value of define "__SSE4_2__" : 1 00:02:26.307 Fetching value of define "__AES__" : 1 00:02:26.307 Fetching value of define "__AVX__" : 1 00:02:26.307 Fetching value of define "__AVX2__" : 1 00:02:26.307 Fetching value of define "__AVX512BW__" : (undefined) 00:02:26.307 Fetching value of define "__AVX512CD__" : (undefined) 00:02:26.307 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:26.307 Fetching value of define "__AVX512F__" : (undefined) 00:02:26.307 Fetching value of define "__AVX512VL__" : (undefined) 00:02:26.307 Fetching value of define "__PCLMUL__" : 1 00:02:26.307 Fetching value of define "__RDRND__" : 1 00:02:26.307 Fetching value of define "__RDSEED__" : 1 00:02:26.307 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:26.307 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:26.307 Message: lib/log: Defining dependency "log" 00:02:26.307 Message: lib/kvargs: Defining dependency "kvargs" 00:02:26.307 Message: lib/argparse: Defining dependency "argparse" 00:02:26.307 Message: lib/telemetry: Defining dependency "telemetry" 00:02:26.307 Checking for function "getentropy" : NO 
00:02:26.307 Message: lib/eal: Defining dependency "eal" 00:02:26.307 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:26.307 Message: lib/ring: Defining dependency "ring" 00:02:26.307 Message: lib/rcu: Defining dependency "rcu" 00:02:26.307 Message: lib/mempool: Defining dependency "mempool" 00:02:26.307 Message: lib/mbuf: Defining dependency "mbuf" 00:02:26.307 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:26.307 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.307 Compiler for C supports arguments -mpclmul: YES 00:02:26.307 Compiler for C supports arguments -maes: YES 00:02:26.307 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:26.307 Compiler for C supports arguments -mavx512bw: YES 00:02:26.307 Compiler for C supports arguments -mavx512dq: YES 00:02:26.307 Compiler for C supports arguments -mavx512vl: YES 00:02:26.307 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:26.307 Compiler for C supports arguments -mavx2: YES 00:02:26.307 Compiler for C supports arguments -mavx: YES 00:02:26.307 Message: lib/net: Defining dependency "net" 00:02:26.307 Message: lib/meter: Defining dependency "meter" 00:02:26.307 Message: lib/ethdev: Defining dependency "ethdev" 00:02:26.307 Message: lib/pci: Defining dependency "pci" 00:02:26.307 Message: lib/cmdline: Defining dependency "cmdline" 00:02:26.307 Message: lib/metrics: Defining dependency "metrics" 00:02:26.307 Message: lib/hash: Defining dependency "hash" 00:02:26.307 Message: lib/timer: Defining dependency "timer" 00:02:26.307 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.307 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:26.307 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:26.307 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:26.307 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:26.307 Message: lib/acl: Defining dependency "acl" 00:02:26.307 Message: lib/bbdev: Defining dependency "bbdev" 00:02:26.307 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:26.307 Run-time dependency libelf found: YES 0.190 00:02:26.307 Message: lib/bpf: Defining dependency "bpf" 00:02:26.307 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:26.307 Message: lib/compressdev: Defining dependency "compressdev" 00:02:26.307 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:26.307 Message: lib/distributor: Defining dependency "distributor" 00:02:26.307 Message: lib/dmadev: Defining dependency "dmadev" 00:02:26.307 Message: lib/efd: Defining dependency "efd" 00:02:26.307 Message: lib/eventdev: Defining dependency "eventdev" 00:02:26.307 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:26.307 Message: lib/gpudev: Defining dependency "gpudev" 00:02:26.307 Message: lib/gro: Defining dependency "gro" 00:02:26.307 Message: lib/gso: Defining dependency "gso" 00:02:26.307 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:26.307 Message: lib/jobstats: Defining dependency "jobstats" 00:02:26.307 Message: lib/latencystats: Defining dependency "latencystats" 00:02:26.307 Message: lib/lpm: Defining dependency "lpm" 00:02:26.307 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.307 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:26.307 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:26.307 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:02:26.307 Message: lib/member: Defining dependency "member" 00:02:26.307 Message: lib/pcapng: Defining dependency "pcapng" 00:02:26.307 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:26.307 Message: lib/power: Defining dependency "power" 00:02:26.307 Message: lib/rawdev: Defining dependency "rawdev" 00:02:26.307 Message: lib/regexdev: Defining dependency "regexdev" 00:02:26.307 Message: lib/mldev: Defining dependency "mldev" 00:02:26.307 Message: lib/rib: Defining dependency "rib" 00:02:26.307 Message: lib/reorder: Defining dependency "reorder" 00:02:26.307 Message: lib/sched: Defining dependency "sched" 00:02:26.307 Message: lib/security: Defining dependency "security" 00:02:26.307 Message: lib/stack: Defining dependency "stack" 00:02:26.307 Has header "linux/userfaultfd.h" : YES 00:02:26.307 Has header "linux/vduse.h" : YES 00:02:26.307 Message: lib/vhost: Defining dependency "vhost" 00:02:26.307 Message: lib/ipsec: Defining dependency "ipsec" 00:02:26.307 Message: lib/pdcp: Defining dependency "pdcp" 00:02:26.307 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.307 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:26.307 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:26.307 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:26.307 Message: lib/fib: Defining dependency "fib" 00:02:26.307 Message: lib/port: Defining dependency "port" 00:02:26.307 Message: lib/pdump: Defining dependency "pdump" 00:02:26.307 Message: lib/table: Defining dependency "table" 00:02:26.307 Message: lib/pipeline: Defining dependency "pipeline" 00:02:26.307 Message: lib/graph: Defining dependency "graph" 00:02:26.307 Message: lib/node: Defining dependency "node" 00:02:26.307 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:27.680 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:27.680 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:27.680 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:27.680 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:27.680 Compiler for C supports arguments -Wno-unused-value: YES 00:02:27.680 Compiler for C supports arguments -Wno-format: YES 00:02:27.680 Compiler for C supports arguments -Wno-format-security: YES 00:02:27.680 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:27.680 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:27.680 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:27.680 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:27.680 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.680 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:27.680 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:27.680 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:27.680 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:27.680 Has header "sys/epoll.h" : YES 00:02:27.680 Program doxygen found: YES (/usr/bin/doxygen) 00:02:27.680 Configuring doxy-api-html.conf using configuration 00:02:27.680 Configuring doxy-api-man.conf using configuration 00:02:27.680 Program mandb found: YES (/usr/bin/mandb) 00:02:27.680 Program sphinx-build found: NO 00:02:27.680 Configuring rte_build_config.h using configuration 00:02:27.680 Message: 00:02:27.680 ================= 00:02:27.680 Applications Enabled 00:02:27.680 ================= 00:02:27.680 00:02:27.680 apps: 
00:02:27.680 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:27.680 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:27.680 test-pmd, test-regex, test-sad, test-security-perf, 00:02:27.680 00:02:27.680 Message: 00:02:27.680 ================= 00:02:27.680 Libraries Enabled 00:02:27.680 ================= 00:02:27.680 00:02:27.680 libs: 00:02:27.680 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:27.680 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:27.681 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:27.681 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:27.681 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:27.681 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:27.681 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:27.681 graph, node, 00:02:27.681 00:02:27.681 Message: 00:02:27.681 =============== 00:02:27.681 Drivers Enabled 00:02:27.681 =============== 00:02:27.681 00:02:27.681 common: 00:02:27.681 00:02:27.681 bus: 00:02:27.681 pci, vdev, 00:02:27.681 mempool: 00:02:27.681 ring, 00:02:27.681 dma: 00:02:27.681 00:02:27.681 net: 00:02:27.681 i40e, 00:02:27.681 raw: 00:02:27.681 00:02:27.681 crypto: 00:02:27.681 00:02:27.681 compress: 00:02:27.681 00:02:27.681 regex: 00:02:27.681 00:02:27.681 ml: 00:02:27.681 00:02:27.681 vdpa: 00:02:27.681 00:02:27.681 event: 00:02:27.681 00:02:27.681 baseband: 00:02:27.681 00:02:27.681 gpu: 00:02:27.681 00:02:27.681 00:02:27.681 Message: 00:02:27.681 ================= 00:02:27.681 Content Skipped 00:02:27.681 ================= 00:02:27.681 00:02:27.681 apps: 00:02:27.681 00:02:27.681 libs: 00:02:27.681 00:02:27.681 drivers: 00:02:27.681 common/cpt: not in enabled drivers build config 00:02:27.681 common/dpaax: not in enabled drivers build config 00:02:27.681 common/iavf: not in enabled drivers build config 00:02:27.681 common/idpf: not in enabled drivers build config 00:02:27.681 common/ionic: not in enabled drivers build config 00:02:27.681 common/mvep: not in enabled drivers build config 00:02:27.681 common/octeontx: not in enabled drivers build config 00:02:27.681 bus/auxiliary: not in enabled drivers build config 00:02:27.681 bus/cdx: not in enabled drivers build config 00:02:27.681 bus/dpaa: not in enabled drivers build config 00:02:27.681 bus/fslmc: not in enabled drivers build config 00:02:27.681 bus/ifpga: not in enabled drivers build config 00:02:27.681 bus/platform: not in enabled drivers build config 00:02:27.681 bus/uacce: not in enabled drivers build config 00:02:27.681 bus/vmbus: not in enabled drivers build config 00:02:27.681 common/cnxk: not in enabled drivers build config 00:02:27.681 common/mlx5: not in enabled drivers build config 00:02:27.681 common/nfp: not in enabled drivers build config 00:02:27.681 common/nitrox: not in enabled drivers build config 00:02:27.681 common/qat: not in enabled drivers build config 00:02:27.681 common/sfc_efx: not in enabled drivers build config 00:02:27.681 mempool/bucket: not in enabled drivers build config 00:02:27.681 mempool/cnxk: not in enabled drivers build config 00:02:27.681 mempool/dpaa: not in enabled drivers build config 00:02:27.681 mempool/dpaa2: not in enabled drivers build config 00:02:27.681 mempool/octeontx: not in enabled drivers build config 00:02:27.681 mempool/stack: not in enabled drivers build config 00:02:27.681 
dma/cnxk: not in enabled drivers build config 00:02:27.681 dma/dpaa: not in enabled drivers build config 00:02:27.681 dma/dpaa2: not in enabled drivers build config 00:02:27.681 dma/hisilicon: not in enabled drivers build config 00:02:27.681 dma/idxd: not in enabled drivers build config 00:02:27.681 dma/ioat: not in enabled drivers build config 00:02:27.681 dma/odm: not in enabled drivers build config 00:02:27.681 dma/skeleton: not in enabled drivers build config 00:02:27.681 net/af_packet: not in enabled drivers build config 00:02:27.681 net/af_xdp: not in enabled drivers build config 00:02:27.681 net/ark: not in enabled drivers build config 00:02:27.681 net/atlantic: not in enabled drivers build config 00:02:27.681 net/avp: not in enabled drivers build config 00:02:27.681 net/axgbe: not in enabled drivers build config 00:02:27.681 net/bnx2x: not in enabled drivers build config 00:02:27.681 net/bnxt: not in enabled drivers build config 00:02:27.681 net/bonding: not in enabled drivers build config 00:02:27.681 net/cnxk: not in enabled drivers build config 00:02:27.681 net/cpfl: not in enabled drivers build config 00:02:27.681 net/cxgbe: not in enabled drivers build config 00:02:27.681 net/dpaa: not in enabled drivers build config 00:02:27.681 net/dpaa2: not in enabled drivers build config 00:02:27.681 net/e1000: not in enabled drivers build config 00:02:27.681 net/ena: not in enabled drivers build config 00:02:27.681 net/enetc: not in enabled drivers build config 00:02:27.681 net/enetfec: not in enabled drivers build config 00:02:27.681 net/enic: not in enabled drivers build config 00:02:27.681 net/failsafe: not in enabled drivers build config 00:02:27.681 net/fm10k: not in enabled drivers build config 00:02:27.681 net/gve: not in enabled drivers build config 00:02:27.681 net/hinic: not in enabled drivers build config 00:02:27.681 net/hns3: not in enabled drivers build config 00:02:27.681 net/iavf: not in enabled drivers build config 00:02:27.681 net/ice: not in enabled drivers build config 00:02:27.681 net/idpf: not in enabled drivers build config 00:02:27.681 net/igc: not in enabled drivers build config 00:02:27.681 net/ionic: not in enabled drivers build config 00:02:27.681 net/ipn3ke: not in enabled drivers build config 00:02:27.681 net/ixgbe: not in enabled drivers build config 00:02:27.681 net/mana: not in enabled drivers build config 00:02:27.681 net/memif: not in enabled drivers build config 00:02:27.681 net/mlx4: not in enabled drivers build config 00:02:27.681 net/mlx5: not in enabled drivers build config 00:02:27.681 net/mvneta: not in enabled drivers build config 00:02:27.681 net/mvpp2: not in enabled drivers build config 00:02:27.681 net/netvsc: not in enabled drivers build config 00:02:27.681 net/nfb: not in enabled drivers build config 00:02:27.681 net/nfp: not in enabled drivers build config 00:02:27.681 net/ngbe: not in enabled drivers build config 00:02:27.681 net/null: not in enabled drivers build config 00:02:27.681 net/octeontx: not in enabled drivers build config 00:02:27.681 net/octeon_ep: not in enabled drivers build config 00:02:27.681 net/pcap: not in enabled drivers build config 00:02:27.681 net/pfe: not in enabled drivers build config 00:02:27.681 net/qede: not in enabled drivers build config 00:02:27.681 net/ring: not in enabled drivers build config 00:02:27.681 net/sfc: not in enabled drivers build config 00:02:27.681 net/softnic: not in enabled drivers build config 00:02:27.681 net/tap: not in enabled drivers build config 00:02:27.681 net/thunderx: not in 
enabled drivers build config 00:02:27.681 net/txgbe: not in enabled drivers build config 00:02:27.681 net/vdev_netvsc: not in enabled drivers build config 00:02:27.681 net/vhost: not in enabled drivers build config 00:02:27.681 net/virtio: not in enabled drivers build config 00:02:27.681 net/vmxnet3: not in enabled drivers build config 00:02:27.681 raw/cnxk_bphy: not in enabled drivers build config 00:02:27.681 raw/cnxk_gpio: not in enabled drivers build config 00:02:27.681 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:27.681 raw/ifpga: not in enabled drivers build config 00:02:27.681 raw/ntb: not in enabled drivers build config 00:02:27.681 raw/skeleton: not in enabled drivers build config 00:02:27.681 crypto/armv8: not in enabled drivers build config 00:02:27.681 crypto/bcmfs: not in enabled drivers build config 00:02:27.681 crypto/caam_jr: not in enabled drivers build config 00:02:27.681 crypto/ccp: not in enabled drivers build config 00:02:27.681 crypto/cnxk: not in enabled drivers build config 00:02:27.681 crypto/dpaa_sec: not in enabled drivers build config 00:02:27.681 crypto/dpaa2_sec: not in enabled drivers build config 00:02:27.681 crypto/ionic: not in enabled drivers build config 00:02:27.681 crypto/ipsec_mb: not in enabled drivers build config 00:02:27.681 crypto/mlx5: not in enabled drivers build config 00:02:27.681 crypto/mvsam: not in enabled drivers build config 00:02:27.681 crypto/nitrox: not in enabled drivers build config 00:02:27.681 crypto/null: not in enabled drivers build config 00:02:27.681 crypto/octeontx: not in enabled drivers build config 00:02:27.681 crypto/openssl: not in enabled drivers build config 00:02:27.681 crypto/scheduler: not in enabled drivers build config 00:02:27.681 crypto/uadk: not in enabled drivers build config 00:02:27.681 crypto/virtio: not in enabled drivers build config 00:02:27.681 compress/isal: not in enabled drivers build config 00:02:27.681 compress/mlx5: not in enabled drivers build config 00:02:27.681 compress/nitrox: not in enabled drivers build config 00:02:27.681 compress/octeontx: not in enabled drivers build config 00:02:27.681 compress/uadk: not in enabled drivers build config 00:02:27.681 compress/zlib: not in enabled drivers build config 00:02:27.681 regex/mlx5: not in enabled drivers build config 00:02:27.681 regex/cn9k: not in enabled drivers build config 00:02:27.681 ml/cnxk: not in enabled drivers build config 00:02:27.681 vdpa/ifc: not in enabled drivers build config 00:02:27.681 vdpa/mlx5: not in enabled drivers build config 00:02:27.681 vdpa/nfp: not in enabled drivers build config 00:02:27.681 vdpa/sfc: not in enabled drivers build config 00:02:27.681 event/cnxk: not in enabled drivers build config 00:02:27.681 event/dlb2: not in enabled drivers build config 00:02:27.681 event/dpaa: not in enabled drivers build config 00:02:27.681 event/dpaa2: not in enabled drivers build config 00:02:27.681 event/dsw: not in enabled drivers build config 00:02:27.681 event/opdl: not in enabled drivers build config 00:02:27.681 event/skeleton: not in enabled drivers build config 00:02:27.681 event/sw: not in enabled drivers build config 00:02:27.681 event/octeontx: not in enabled drivers build config 00:02:27.681 baseband/acc: not in enabled drivers build config 00:02:27.681 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:27.681 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:27.681 baseband/la12xx: not in enabled drivers build config 00:02:27.681 baseband/null: not in enabled drivers 
build config 00:02:27.681 baseband/turbo_sw: not in enabled drivers build config 00:02:27.682 gpu/cuda: not in enabled drivers build config 00:02:27.682 00:02:27.682 00:02:27.682 Build targets in project: 224 00:02:27.682 00:02:27.682 DPDK 24.07.0-rc2 00:02:27.682 00:02:27.682 User defined options 00:02:27.682 libdir : lib 00:02:27.682 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:27.682 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:27.682 c_link_args : 00:02:27.682 enable_docs : false 00:02:27.682 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:27.682 enable_kmods : false 00:02:27.682 machine : native 00:02:27.682 tests : false 00:02:27.682 00:02:27.682 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:27.682 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:27.682 03:58:20 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:27.682 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:27.682 [1/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:27.939 [2/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:27.939 [3/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:27.939 [4/723] Linking static target lib/librte_kvargs.a 00:02:27.939 [5/723] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:27.939 [6/723] Linking static target lib/librte_log.a 00:02:27.939 [7/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:27.939 [8/723] Linking static target lib/librte_argparse.a 00:02:28.197 [9/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.197 [10/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.197 [11/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:28.197 [12/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:28.197 [13/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:28.455 [14/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:28.455 [15/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:28.455 [16/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:28.455 [17/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:28.455 [18/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.455 [19/723] Linking target lib/librte_log.so.24.2 00:02:28.455 [20/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:28.714 [21/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:02:28.714 [22/723] Linking target lib/librte_kvargs.so.24.2 00:02:28.714 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:28.976 [24/723] Linking target lib/librte_argparse.so.24.2 00:02:28.976 [25/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.976 [26/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:02:28.976 [27/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:28.976 [28/723] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:28.976 [29/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:29.245 [30/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:29.245 [31/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:29.245 [32/723] Linking static target lib/librte_telemetry.a 00:02:29.245 [33/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:29.245 [34/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:29.245 [35/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:29.506 [36/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.506 [37/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:29.506 [38/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:29.506 [39/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:29.506 [40/723] Linking target lib/librte_telemetry.so.24.2 00:02:29.506 [41/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:29.764 [42/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:29.764 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:29.764 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:29.764 [45/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:02:29.764 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:29.764 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:29.764 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:30.022 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:30.279 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:30.279 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:30.279 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:30.279 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:30.537 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:30.537 [55/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:30.537 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:30.537 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:30.794 [58/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:30.794 [59/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:30.794 [60/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:30.794 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:30.794 [62/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:31.052 [63/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:31.052 [64/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:31.052 [65/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:31.052 [66/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:31.052 [67/723] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:31.052 [68/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:31.309 [69/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:31.309 [70/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:31.309 [71/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:31.567 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:31.567 [73/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:31.567 [74/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:31.825 [75/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:31.825 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:31.825 [77/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:31.825 [78/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:31.825 [79/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:31.825 [80/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:31.825 [81/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:32.083 [82/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:32.083 [83/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:32.083 [84/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:32.341 [85/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:32.341 [86/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:32.341 [87/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:32.341 [88/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:32.599 [89/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:32.599 [90/723] Linking static target lib/librte_ring.a 00:02:32.857 [91/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:32.857 [92/723] Linking static target lib/librte_eal.a 00:02:32.857 [93/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:32.857 [94/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:32.857 [95/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.857 [96/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:33.115 [97/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:33.115 [98/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:33.115 [99/723] Linking static target lib/librte_mempool.a 00:02:33.115 [100/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:33.420 [101/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:33.420 [102/723] Linking static target lib/librte_rcu.a 00:02:33.420 [103/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:33.420 [104/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:33.678 [105/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:33.678 [106/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.678 [107/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:33.678 [108/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:33.678 [109/723] 
Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:33.678 [110/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:33.678 [111/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.935 [112/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:33.935 [113/723] Linking static target lib/librte_mbuf.a 00:02:34.193 [114/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:34.193 [115/723] Linking static target lib/librte_net.a 00:02:34.193 [116/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:34.193 [117/723] Linking static target lib/librte_meter.a 00:02:34.450 [118/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:34.450 [119/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:34.450 [120/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.450 [121/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.450 [122/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.450 [123/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:34.450 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:35.015 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:35.272 [126/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:35.272 [127/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:35.529 [128/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:35.530 [129/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.530 [130/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:35.530 [131/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:35.530 [132/723] Linking static target lib/librte_pci.a 00:02:35.530 [133/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:35.787 [134/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.787 [135/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:35.787 [136/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:36.045 [137/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:36.045 [138/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:36.045 [139/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:36.045 [140/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:36.045 [141/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:36.045 [142/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:36.045 [143/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:36.045 [144/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:36.045 [145/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:36.303 [146/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:36.303 [147/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:36.303 [148/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:36.560 [149/723] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:36.560 [150/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:36.560 [151/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:36.560 [152/723] Linking static target lib/librte_cmdline.a 00:02:36.818 [153/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:36.818 [154/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:36.818 [155/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:36.818 [156/723] Linking static target lib/librte_metrics.a 00:02:36.818 [157/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:37.075 [158/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:37.333 [159/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.590 [160/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:37.590 [161/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.591 [162/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:37.849 [163/723] Linking static target lib/librte_timer.a 00:02:38.107 [164/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.107 [165/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:38.107 [166/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:38.365 [167/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:38.623 [168/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:38.881 [169/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:38.881 [170/723] Linking static target lib/librte_ethdev.a 00:02:39.140 [171/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:39.140 [172/723] Linking static target lib/librte_bitratestats.a 00:02:39.140 [173/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:39.140 [174/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:39.140 [175/723] Linking static target lib/librte_bbdev.a 00:02:39.140 [176/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:39.140 [177/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:39.140 [178/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.140 [179/723] Linking static target lib/librte_hash.a 00:02:39.140 [180/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.398 [181/723] Linking target lib/librte_eal.so.24.2 00:02:39.398 [182/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:39.398 [183/723] Linking target lib/librte_ring.so.24.2 00:02:39.656 [184/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:39.656 [185/723] Linking target lib/librte_rcu.so.24.2 00:02:39.656 [186/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:39.914 [187/723] Linking target lib/librte_mempool.so.24.2 00:02:39.914 [188/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:39.914 [189/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.914 [190/723] Linking target lib/librte_meter.so.24.2 00:02:39.914 [191/723] Linking target lib/librte_pci.so.24.2 
00:02:39.914 [192/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:39.914 [193/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:39.914 [194/723] Linking static target lib/acl/libavx2_tmp.a 00:02:39.914 [195/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.914 [196/723] Linking target lib/librte_timer.so.24.2 00:02:39.914 [197/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:39.914 [198/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:39.914 [199/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:39.914 [200/723] Linking target lib/librte_mbuf.so.24.2 00:02:40.182 [201/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:40.182 [202/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:40.182 [203/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:40.182 [204/723] Linking target lib/librte_net.so.24.2 00:02:40.182 [205/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:40.182 [206/723] Linking static target lib/acl/libavx512_tmp.a 00:02:40.182 [207/723] Linking target lib/librte_bbdev.so.24.2 00:02:40.441 [208/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:40.441 [209/723] Linking static target lib/librte_acl.a 00:02:40.441 [210/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:40.441 [211/723] Linking target lib/librte_cmdline.so.24.2 00:02:40.441 [212/723] Linking target lib/librte_hash.so.24.2 00:02:40.441 [213/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:40.699 [214/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:40.699 [215/723] Linking static target lib/librte_cfgfile.a 00:02:40.699 [216/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:40.699 [217/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.699 [218/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:40.699 [219/723] Linking target lib/librte_acl.so.24.2 00:02:40.957 [220/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:40.957 [221/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:40.957 [222/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.957 [223/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:40.957 [224/723] Linking target lib/librte_cfgfile.so.24.2 00:02:41.215 [225/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:41.215 [226/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:41.473 [227/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:41.473 [228/723] Linking static target lib/librte_bpf.a 00:02:41.473 [229/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:41.473 [230/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:41.473 [231/723] Linking static target lib/librte_compressdev.a 00:02:41.731 [232/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:41.731 [233/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.731 [234/723] Compiling C 
object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:41.989 [235/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:41.989 [236/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:41.989 [237/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:41.989 [238/723] Linking static target lib/librte_distributor.a 00:02:41.989 [239/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.248 [240/723] Linking target lib/librte_compressdev.so.24.2 00:02:42.248 [241/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:42.506 [242/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.506 [243/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:42.506 [244/723] Linking target lib/librte_distributor.so.24.2 00:02:42.506 [245/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:42.506 [246/723] Linking static target lib/librte_dmadev.a 00:02:42.765 [247/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:43.023 [248/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.023 [249/723] Linking target lib/librte_dmadev.so.24.2 00:02:43.023 [250/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:43.023 [251/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:43.282 [252/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:43.282 [253/723] Linking static target lib/librte_efd.a 00:02:43.540 [254/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:43.540 [255/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:43.540 [256/723] Linking static target lib/librte_cryptodev.a 00:02:43.540 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:43.540 [258/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.798 [259/723] Linking target lib/librte_efd.so.24.2 00:02:44.057 [260/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:44.057 [261/723] Linking static target lib/librte_dispatcher.a 00:02:44.057 [262/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:44.315 [263/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.315 [264/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:44.315 [265/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:44.315 [266/723] Linking static target lib/librte_gpudev.a 00:02:44.315 [267/723] Linking target lib/librte_ethdev.so.24.2 00:02:44.574 [268/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.574 [269/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:44.574 [270/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:44.574 [271/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:44.574 [272/723] Linking target lib/librte_metrics.so.24.2 00:02:44.574 [273/723] Linking target lib/librte_bpf.so.24.2 00:02:44.832 [274/723] 
Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:44.832 [275/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:44.832 [276/723] Linking target lib/librte_bitratestats.so.24.2 00:02:44.832 [277/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:45.091 [278/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.091 [279/723] Linking target lib/librte_cryptodev.so.24.2 00:02:45.091 [280/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:45.091 [281/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:45.091 [282/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:45.350 [283/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.350 [284/723] Linking target lib/librte_gpudev.so.24.2 00:02:45.350 [285/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:45.350 [286/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:45.350 [287/723] Linking static target lib/librte_gro.a 00:02:45.608 [288/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:45.608 [289/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:45.608 [290/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:45.608 [291/723] Linking static target lib/librte_eventdev.a 00:02:45.608 [292/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.608 [293/723] Linking target lib/librte_gro.so.24.2 00:02:45.867 [294/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:45.867 [295/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:45.867 [296/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:45.867 [297/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:45.867 [298/723] Linking static target lib/librte_gso.a 00:02:46.126 [299/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.126 [300/723] Linking target lib/librte_gso.so.24.2 00:02:46.126 [301/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:46.126 [302/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:46.384 [303/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:46.384 [304/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:46.384 [305/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:46.384 [306/723] Linking static target lib/librte_jobstats.a 00:02:46.384 [307/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:46.643 [308/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:46.643 [309/723] Linking static target lib/librte_latencystats.a 00:02:46.643 [310/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:46.643 [311/723] Linking static target lib/librte_ip_frag.a 00:02:46.901 [312/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.901 [313/723] Linking target lib/librte_jobstats.so.24.2 00:02:46.901 [314/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.901 
[315/723] Linking target lib/librte_latencystats.so.24.2 00:02:46.901 [316/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:46.901 [317/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:46.901 [318/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.901 [319/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:46.901 [320/723] Linking target lib/librte_ip_frag.so.24.2 00:02:46.901 [321/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:47.182 [322/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:47.182 [323/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:47.182 [324/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:47.182 [325/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:47.455 [326/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:47.455 [327/723] Linking static target lib/librte_lpm.a 00:02:47.714 [328/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:47.714 [329/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:47.714 [330/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.973 [331/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:47.973 [332/723] Linking target lib/librte_eventdev.so.24.2 00:02:47.973 [333/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:47.973 [334/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.973 [335/723] Linking static target lib/librte_pcapng.a 00:02:47.973 [336/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:47.973 [337/723] Linking target lib/librte_lpm.so.24.2 00:02:47.973 [338/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:47.973 [339/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:47.973 [340/723] Linking target lib/librte_dispatcher.so.24.2 00:02:47.973 [341/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:47.973 [342/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:48.232 [343/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.232 [344/723] Linking target lib/librte_pcapng.so.24.2 00:02:48.232 [345/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:48.232 [346/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:48.490 [347/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:48.490 [348/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:48.748 [349/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:48.748 [350/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:48.748 [351/723] Linking static target lib/librte_power.a 00:02:48.748 [352/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:48.748 [353/723] Linking static target lib/librte_member.a 00:02:49.007 [354/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:49.007 [355/723] Linking static target lib/librte_regexdev.a 00:02:49.007 [356/723] 
Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:49.007 [357/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:49.007 [358/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:49.007 [359/723] Linking static target lib/librte_rawdev.a 00:02:49.265 [360/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:49.265 [361/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.265 [362/723] Linking target lib/librte_member.so.24.2 00:02:49.265 [363/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:49.265 [364/723] Linking static target lib/librte_mldev.a 00:02:49.265 [365/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:49.524 [366/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:49.524 [367/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.524 [368/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.524 [369/723] Linking target lib/librte_rawdev.so.24.2 00:02:49.524 [370/723] Linking target lib/librte_power.so.24.2 00:02:49.782 [371/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.782 [372/723] Linking target lib/librte_regexdev.so.24.2 00:02:49.782 [373/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:49.782 [374/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:49.782 [375/723] Linking static target lib/librte_reorder.a 00:02:50.040 [376/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:50.040 [377/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:50.040 [378/723] Linking static target lib/librte_rib.a 00:02:50.040 [379/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:50.040 [380/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:50.040 [381/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.298 [382/723] Linking target lib/librte_reorder.so.24.2 00:02:50.298 [383/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:50.299 [384/723] Linking static target lib/librte_stack.a 00:02:50.299 [385/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:50.299 [386/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.299 [387/723] Linking static target lib/librte_security.a 00:02:50.557 [388/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.557 [389/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:50.557 [390/723] Linking target lib/librte_rib.so.24.2 00:02:50.557 [391/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.557 [392/723] Linking target lib/librte_stack.so.24.2 00:02:50.557 [393/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:50.815 [394/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.815 [395/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.815 [396/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.815 [397/723] Linking target lib/librte_security.so.24.2 00:02:50.815 [398/723] 
Linking target lib/librte_mldev.so.24.2 00:02:50.815 [399/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:50.815 [400/723] Linking static target lib/librte_sched.a 00:02:50.815 [401/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.074 [402/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:51.074 [403/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.332 [404/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.332 [405/723] Linking target lib/librte_sched.so.24.2 00:02:51.591 [406/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:51.591 [407/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.591 [408/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.903 [409/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:52.160 [410/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:52.160 [411/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:52.160 [412/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.418 [413/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:52.677 [414/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:52.677 [415/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:52.677 [416/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:52.936 [417/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:52.936 [418/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:52.936 [419/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:52.936 [420/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:52.936 [421/723] Linking static target lib/librte_ipsec.a 00:02:53.503 [422/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.503 [423/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:53.503 [424/723] Linking target lib/librte_ipsec.so.24.2 00:02:53.503 [425/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:53.503 [426/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:53.503 [427/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:53.503 [428/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:53.503 [429/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:53.503 [430/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:53.503 [431/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:53.503 [432/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:54.436 [433/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:54.436 [434/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:54.436 [435/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:54.436 [436/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:54.436 [437/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:54.436 [438/723] Linking static target lib/librte_fib.a 00:02:54.694 [439/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:54.694 [440/723] Linking static target lib/librte_pdcp.a 00:02:54.694 [441/723] Compiling C object 
lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:54.953 [442/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:54.953 [443/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.953 [444/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.953 [445/723] Linking target lib/librte_fib.so.24.2 00:02:54.953 [446/723] Linking target lib/librte_pdcp.so.24.2 00:02:55.518 [447/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:55.518 [448/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:55.518 [449/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:55.777 [450/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:55.777 [451/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:55.777 [452/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:56.035 [453/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:56.035 [454/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:56.293 [455/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:56.293 [456/723] Linking static target lib/librte_port.a 00:02:56.550 [457/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:56.550 [458/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:56.550 [459/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:56.550 [460/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:56.812 [461/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:56.812 [462/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.812 [463/723] Linking target lib/librte_port.so.24.2 00:02:56.812 [464/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:56.812 [465/723] Linking static target lib/librte_pdump.a 00:02:56.812 [466/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:56.812 [467/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:57.072 [468/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.072 [469/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:57.072 [470/723] Linking target lib/librte_pdump.so.24.2 00:02:57.638 [471/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:57.638 [472/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:57.638 [473/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:57.638 [474/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:57.905 [475/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:57.905 [476/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:57.905 [477/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:57.905 [478/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.163 [479/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:58.163 [480/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:58.163 [481/723] Linking static target lib/librte_table.a 
00:02:58.422 [482/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:58.680 [483/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:58.939 [484/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.939 [485/723] Linking target lib/librte_table.so.24.2 00:02:58.939 [486/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:58.939 [487/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:59.198 [488/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:59.457 [489/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:59.457 [490/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:59.715 [491/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:59.715 [492/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:59.973 [493/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:59.973 [494/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:59.973 [495/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:59.973 [496/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:00.231 [497/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:00.489 [498/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:00.747 [499/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:00.747 [500/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:00.747 [501/723] Linking static target lib/librte_graph.a 00:03:00.747 [502/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:00.747 [503/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:01.313 [504/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:01.313 [505/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:01.573 [506/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.573 [507/723] Linking target lib/librte_graph.so.24.2 00:03:01.573 [508/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:03:01.831 [509/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:01.831 [510/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:01.831 [511/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:01.831 [512/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:01.831 [513/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:02.090 [514/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:02.090 [515/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:02.349 [516/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:02.349 [517/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:02.608 [518/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:02.608 [519/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:02.608 [520/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.608 [521/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.867 [522/723] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.867 [523/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:02.867 [524/723] Linking static target lib/librte_node.a 00:03:03.126 [525/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:03.126 [526/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.384 [527/723] Linking target lib/librte_node.so.24.2 00:03:03.384 [528/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:03.384 [529/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:03.384 [530/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:03.384 [531/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:03.643 [532/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:03.643 [533/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.643 [534/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:03.643 [535/723] Linking static target drivers/librte_bus_vdev.a 00:03:03.643 [536/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.643 [537/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.643 [538/723] Linking static target drivers/librte_bus_pci.a 00:03:03.902 [539/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:03.902 [540/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.902 [541/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:03.902 [542/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.902 [543/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:03.902 [544/723] Linking target drivers/librte_bus_vdev.so.24.2 00:03:04.161 [545/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.161 [546/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:04.161 [547/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:03:04.161 [548/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.161 [549/723] Linking target drivers/librte_bus_pci.so.24.2 00:03:04.161 [550/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:04.161 [551/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.161 [552/723] Linking static target drivers/librte_mempool_ring.a 00:03:04.161 [553/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.420 [554/723] Linking target drivers/librte_mempool_ring.so.24.2 00:03:04.420 [555/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:03:04.420 [556/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:04.987 [557/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:05.246 [558/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:05.246 [559/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:05.246 [560/723] Linking static target 
drivers/net/i40e/base/libi40e_base.a 00:03:05.246 [561/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:06.182 [562/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:06.182 [563/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:06.182 [564/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:06.182 [565/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:06.440 [566/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:06.441 [567/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:06.699 [568/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:06.957 [569/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:06.958 [570/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:07.216 [571/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:07.216 [572/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:07.216 [573/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:07.818 [574/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:07.818 [575/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:08.077 [576/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:08.336 [577/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:08.336 [578/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:08.336 [579/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:08.336 [580/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:08.595 [581/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:08.595 [582/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:08.595 [583/723] Linking static target lib/librte_vhost.a 00:03:08.853 [584/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:08.853 [585/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:09.112 [586/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:09.112 [587/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:09.112 [588/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:09.112 [589/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:09.112 [590/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:09.371 [591/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:09.371 [592/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:09.371 [593/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:09.630 [594/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:09.630 [595/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:09.630 [596/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:09.888 [597/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:09.888 [598/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:09.888 [599/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:09.888 [600/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 
00:03:09.888 [601/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:09.888 [602/723] Linking static target drivers/librte_net_i40e.a 00:03:09.888 [603/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.888 [604/723] Linking target lib/librte_vhost.so.24.2 00:03:10.145 [605/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:10.403 [606/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.403 [607/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:10.662 [608/723] Linking target drivers/librte_net_i40e.so.24.2 00:03:10.662 [609/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:10.662 [610/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:10.921 [611/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:10.921 [612/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:11.180 [613/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:11.439 [614/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:11.439 [615/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:11.439 [616/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:11.698 [617/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:11.698 [618/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:11.698 [619/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:12.265 [620/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:12.265 [621/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:12.265 [622/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:12.265 [623/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:12.265 [624/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:12.265 [625/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:12.524 [626/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:12.524 [627/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:12.524 [628/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:12.524 [629/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:12.782 [630/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:13.040 [631/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:13.299 [632/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:13.299 [633/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:13.299 [634/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:13.299 [635/723] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:14.235 [636/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:14.235 [637/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:14.235 [638/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:14.235 [639/723] Linking static target lib/librte_pipeline.a 00:03:14.235 [640/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:14.494 [641/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:14.494 [642/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:14.494 [643/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:14.494 [644/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:14.753 [645/723] Linking target app/dpdk-dumpcap 00:03:14.753 [646/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:15.013 [647/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:15.013 [648/723] Linking target app/dpdk-graph 00:03:15.013 [649/723] Linking target app/dpdk-pdump 00:03:15.013 [650/723] Linking target app/dpdk-proc-info 00:03:15.272 [651/723] Linking target app/dpdk-test-acl 00:03:15.272 [652/723] Linking target app/dpdk-test-cmdline 00:03:15.530 [653/723] Linking target app/dpdk-test-compress-perf 00:03:15.530 [654/723] Linking target app/dpdk-test-crypto-perf 00:03:15.530 [655/723] Linking target app/dpdk-test-dma-perf 00:03:15.530 [656/723] Linking target app/dpdk-test-fib 00:03:15.530 [657/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:15.789 [658/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:15.789 [659/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:16.049 [660/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:16.049 [661/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:16.049 [662/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:16.049 [663/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:16.307 [664/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:16.307 [665/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:16.307 [666/723] Linking target app/dpdk-test-gpudev 00:03:16.566 [667/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:16.566 [668/723] Linking target app/dpdk-test-eventdev 00:03:16.566 [669/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:16.825 [670/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:16.825 [671/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:16.825 [672/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:16.825 [673/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:17.084 [674/723] Linking target app/dpdk-test-flow-perf 00:03:17.084 [675/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:17.084 [676/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.084 [677/723] Linking target lib/librte_pipeline.so.24.2 
00:03:17.342 [678/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:17.343 [679/723] Linking target app/dpdk-test-bbdev 00:03:17.343 [680/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:17.343 [681/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:17.601 [682/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:17.601 [683/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:17.601 [684/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:17.601 [685/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:18.168 [686/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:18.168 [687/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:18.168 [688/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:18.168 [689/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:18.427 [690/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:18.427 [691/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:18.686 [692/723] Linking target app/dpdk-test-mldev 00:03:18.686 [693/723] Linking target app/dpdk-test-pipeline 00:03:18.686 [694/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:18.945 [695/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:19.205 [696/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:19.463 [697/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:19.463 [698/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:19.463 [699/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:19.463 [700/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:19.722 [701/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:19.981 [702/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:19.981 [703/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:19.981 [704/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:19.981 [705/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:20.548 [706/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:20.806 [707/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:20.806 [708/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:21.065 [709/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:21.065 [710/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:21.065 [711/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:21.323 [712/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:21.323 [713/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:21.323 [714/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:21.582 [715/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:21.582 [716/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:21.582 [717/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:21.582 [718/723] Linking target app/dpdk-test-regex 00:03:21.840 [719/723] Linking target app/dpdk-test-sad 
00:03:21.840 [720/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:22.098 [721/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:22.357 [722/723] Linking target app/dpdk-testpmd 00:03:22.615 [723/723] Linking target app/dpdk-test-security-perf 00:03:22.615 03:59:15 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:22.615 03:59:15 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:22.615 03:59:15 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:22.874 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:22.874 [0/1] Installing files. 00:03:23.135 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:23.135 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.135 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.135 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.136 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.137 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.138 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.138 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.138 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:23.139 
Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.139 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.140 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.399 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.399 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.399 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:23.399 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:23.399 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:23.399 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:23.399 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing 
lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.399 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing 
lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_regexdev.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.400 Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:23.661 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:23.661 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing drivers/librte_mempool_ring.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:23.661 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.661 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:23.661 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.661 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.661 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.662 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.663 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing 
/home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:23.664 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:23.664 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:23.665 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:23.665 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:23.665 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:23.665 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24 00:03:23.665 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:23.665 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:23.665 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:23.665 Installing symlink pointing to librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:23.665 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:23.665 Installing symlink pointing to librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:23.665 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:23.665 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:23.665 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:23.665 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:23.665 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:23.665 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:23.665 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:23.665 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:23.665 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:23.665 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:23.665 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:23.665 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:23.665 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
00:03:23.665 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:23.665 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:23.665 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:23.665 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:23.665 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:23.665 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:23.665 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:23.665 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:23.665 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:23.665 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:23.665 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:23.665 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:23.665 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:23.665 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:23.665 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:23.665 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:23.665 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:23.665 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:23.665 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:23.665 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:23.665 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:23.665 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:23.665 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:23.665 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:23.665 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:23.665 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:23.665 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:23.665 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 
00:03:23.665 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:23.665 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:23.665 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:23.665 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:23.665 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:23.665 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:23.665 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:23.665 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:23.665 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:23.665 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:23.665 Installing symlink pointing to librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:23.665 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:23.665 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:23.665 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:23.665 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:23.665 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:23.665 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:23.665 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:23.665 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:23.665 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:23.665 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:23.665 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:23.665 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:23.665 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:23.665 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:23.665 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:23.665 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:23.665 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:23.665 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:23.665 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:23.665 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 
00:03:23.665 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:23.665 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:23.665 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:23.665 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:23.665 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:23.665 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:23.665 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:23.665 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:23.665 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:23.665 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:23.665 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:23.665 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:23.665 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:23.665 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:23.665 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:23.665 Installing symlink pointing to librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:23.665 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:23.665 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:23.665 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:23.665 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:23.665 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:23.665 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:23.665 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:23.665 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:23.665 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:23.665 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:23.666 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:23.666 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:23.666 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:23.666 Installing symlink pointing to librte_fib.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:23.666 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:23.666 Installing symlink pointing to librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:23.666 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:23.666 Installing symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:23.666 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:23.666 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:23.666 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:23.666 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:23.666 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:23.666 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:23.666 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:23.666 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:23.666 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:23.666 Installing symlink pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:23.666 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:23.666 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:23.666 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:23.666 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:23.666 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:23.666 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:23.666 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:23.666 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:23.924 03:59:17 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:23.924 ************************************ 00:03:23.924 END TEST build_native_dpdk 00:03:23.924 ************************************ 00:03:23.924 03:59:17 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:23.924 00:03:23.925 real 1m2.979s 00:03:23.925 user 7m39.011s 00:03:23.925 sys 1m15.159s 00:03:23.925 03:59:17 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:23.925 03:59:17 build_native_dpdk -- 
common/autotest_common.sh@10 -- $ set +x 00:03:23.925 03:59:17 -- common/autotest_common.sh@1142 -- $ return 0 00:03:23.925 03:59:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:23.925 03:59:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:23.925 03:59:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:23.925 03:59:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:23.925 03:59:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:23.925 03:59:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:23.925 03:59:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:23.925 03:59:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:23.925 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:24.213 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.213 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:24.213 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:24.472 Using 'verbs' RDMA provider 00:03:40.723 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:52.932 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:52.932 Creating mk/config.mk...done. 00:03:52.932 Creating mk/cc.flags.mk...done. 00:03:52.932 Type 'make' to build. 00:03:52.932 03:59:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:52.932 03:59:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:52.932 03:59:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:52.932 03:59:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:52.932 ************************************ 00:03:52.932 START TEST make 00:03:52.932 ************************************ 00:03:52.932 03:59:45 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:52.932 make[1]: Nothing to be done for 'all'. 
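The configure step above picks up the just-installed DPDK through the pkg-config files placed in /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig ("Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs..."). A minimal sketch of how that linkage can be checked by hand on the build host is shown below; the paths are the ones that appear in this log, and the commands are illustrative only, not part of the CI job itself.

# Point pkg-config at the freshly installed DPDK tree (path taken from the log above).
export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig

# Confirm libdpdk.pc is found and report the installed DPDK version.
pkg-config --modversion libdpdk

# Show the linker flags that a consumer such as SPDK's configure would pick up
# for this shared DPDK build.
pkg-config --libs libdpdk

# Inspect one of the versioned symlink chains created by symlink-drivers-solibs.sh,
# e.g. librte_bus_pci.so -> librte_bus_pci.so.24 -> librte_bus_pci.so.24.2
ls -l /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so*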
00:04:19.473 CC lib/log/log.o 00:04:19.473 CC lib/log/log_flags.o 00:04:19.473 CC lib/ut/ut.o 00:04:19.473 CC lib/log/log_deprecated.o 00:04:19.473 CC lib/ut_mock/mock.o 00:04:19.473 LIB libspdk_ut.a 00:04:19.473 LIB libspdk_log.a 00:04:19.473 SO libspdk_ut.so.2.0 00:04:19.473 LIB libspdk_ut_mock.a 00:04:19.473 SO libspdk_log.so.7.0 00:04:19.473 SO libspdk_ut_mock.so.6.0 00:04:19.473 SYMLINK libspdk_ut.so 00:04:19.473 SYMLINK libspdk_log.so 00:04:19.473 SYMLINK libspdk_ut_mock.so 00:04:19.473 CC lib/util/base64.o 00:04:19.473 CC lib/util/bit_array.o 00:04:19.473 CC lib/util/cpuset.o 00:04:19.473 CC lib/util/crc16.o 00:04:19.473 CC lib/util/crc32.o 00:04:19.473 CC lib/util/crc32c.o 00:04:19.473 CC lib/ioat/ioat.o 00:04:19.473 CC lib/dma/dma.o 00:04:19.473 CXX lib/trace_parser/trace.o 00:04:19.473 CC lib/vfio_user/host/vfio_user_pci.o 00:04:19.473 CC lib/vfio_user/host/vfio_user.o 00:04:19.473 CC lib/util/crc32_ieee.o 00:04:19.473 CC lib/util/crc64.o 00:04:19.473 CC lib/util/dif.o 00:04:19.473 CC lib/util/fd.o 00:04:19.473 LIB libspdk_dma.a 00:04:19.473 CC lib/util/fd_group.o 00:04:19.473 SO libspdk_dma.so.4.0 00:04:19.473 LIB libspdk_ioat.a 00:04:19.473 SYMLINK libspdk_dma.so 00:04:19.473 CC lib/util/file.o 00:04:19.473 CC lib/util/hexlify.o 00:04:19.473 SO libspdk_ioat.so.7.0 00:04:19.473 CC lib/util/iov.o 00:04:19.473 CC lib/util/math.o 00:04:19.473 CC lib/util/net.o 00:04:19.473 SYMLINK libspdk_ioat.so 00:04:19.473 CC lib/util/pipe.o 00:04:19.473 LIB libspdk_vfio_user.a 00:04:19.473 SO libspdk_vfio_user.so.5.0 00:04:19.473 CC lib/util/strerror_tls.o 00:04:19.473 CC lib/util/string.o 00:04:19.473 CC lib/util/uuid.o 00:04:19.473 SYMLINK libspdk_vfio_user.so 00:04:19.474 CC lib/util/xor.o 00:04:19.474 CC lib/util/zipf.o 00:04:19.474 LIB libspdk_util.a 00:04:19.474 SO libspdk_util.so.10.0 00:04:19.474 SYMLINK libspdk_util.so 00:04:19.474 LIB libspdk_trace_parser.a 00:04:19.474 SO libspdk_trace_parser.so.5.0 00:04:19.474 SYMLINK libspdk_trace_parser.so 00:04:19.474 CC lib/rdma_utils/rdma_utils.o 00:04:19.474 CC lib/json/json_parse.o 00:04:19.474 CC lib/idxd/idxd.o 00:04:19.474 CC lib/conf/conf.o 00:04:19.474 CC lib/idxd/idxd_user.o 00:04:19.474 CC lib/json/json_util.o 00:04:19.474 CC lib/idxd/idxd_kernel.o 00:04:19.474 CC lib/rdma_provider/common.o 00:04:19.474 CC lib/vmd/vmd.o 00:04:19.474 CC lib/env_dpdk/env.o 00:04:19.474 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:19.474 CC lib/env_dpdk/memory.o 00:04:19.474 LIB libspdk_conf.a 00:04:19.474 CC lib/env_dpdk/pci.o 00:04:19.474 CC lib/json/json_write.o 00:04:19.474 CC lib/env_dpdk/init.o 00:04:19.474 SO libspdk_conf.so.6.0 00:04:19.474 LIB libspdk_rdma_utils.a 00:04:19.474 SO libspdk_rdma_utils.so.1.0 00:04:19.474 SYMLINK libspdk_conf.so 00:04:19.474 CC lib/env_dpdk/threads.o 00:04:19.474 SYMLINK libspdk_rdma_utils.so 00:04:19.474 CC lib/env_dpdk/pci_ioat.o 00:04:19.474 LIB libspdk_rdma_provider.a 00:04:19.474 SO libspdk_rdma_provider.so.6.0 00:04:19.474 SYMLINK libspdk_rdma_provider.so 00:04:19.474 CC lib/env_dpdk/pci_virtio.o 00:04:19.474 CC lib/env_dpdk/pci_vmd.o 00:04:19.474 CC lib/vmd/led.o 00:04:19.474 LIB libspdk_json.a 00:04:19.474 LIB libspdk_idxd.a 00:04:19.474 SO libspdk_json.so.6.0 00:04:19.474 CC lib/env_dpdk/pci_idxd.o 00:04:19.474 SO libspdk_idxd.so.12.0 00:04:19.474 CC lib/env_dpdk/pci_event.o 00:04:19.474 CC lib/env_dpdk/sigbus_handler.o 00:04:19.474 CC lib/env_dpdk/pci_dpdk.o 00:04:19.474 SYMLINK libspdk_json.so 00:04:19.474 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.474 LIB libspdk_vmd.a 00:04:19.474 SYMLINK 
libspdk_idxd.so 00:04:19.474 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.474 SO libspdk_vmd.so.6.0 00:04:19.474 SYMLINK libspdk_vmd.so 00:04:19.474 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.474 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.474 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.474 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:19.474 LIB libspdk_jsonrpc.a 00:04:19.474 SO libspdk_jsonrpc.so.6.0 00:04:19.474 SYMLINK libspdk_jsonrpc.so 00:04:19.474 LIB libspdk_env_dpdk.a 00:04:19.474 CC lib/rpc/rpc.o 00:04:19.474 SO libspdk_env_dpdk.so.15.0 00:04:19.474 SYMLINK libspdk_env_dpdk.so 00:04:19.474 LIB libspdk_rpc.a 00:04:19.474 SO libspdk_rpc.so.6.0 00:04:19.732 SYMLINK libspdk_rpc.so 00:04:19.990 CC lib/keyring/keyring_rpc.o 00:04:19.990 CC lib/keyring/keyring.o 00:04:19.990 CC lib/trace/trace.o 00:04:19.990 CC lib/trace/trace_flags.o 00:04:19.990 CC lib/trace/trace_rpc.o 00:04:19.990 CC lib/notify/notify_rpc.o 00:04:19.990 CC lib/notify/notify.o 00:04:19.990 LIB libspdk_notify.a 00:04:20.248 SO libspdk_notify.so.6.0 00:04:20.248 LIB libspdk_trace.a 00:04:20.248 LIB libspdk_keyring.a 00:04:20.248 SYMLINK libspdk_notify.so 00:04:20.248 SO libspdk_trace.so.10.0 00:04:20.248 SO libspdk_keyring.so.1.0 00:04:20.248 SYMLINK libspdk_trace.so 00:04:20.248 SYMLINK libspdk_keyring.so 00:04:20.506 CC lib/thread/thread.o 00:04:20.506 CC lib/thread/iobuf.o 00:04:20.506 CC lib/sock/sock_rpc.o 00:04:20.506 CC lib/sock/sock.o 00:04:21.072 LIB libspdk_sock.a 00:04:21.072 SO libspdk_sock.so.10.0 00:04:21.072 SYMLINK libspdk_sock.so 00:04:21.330 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:21.330 CC lib/nvme/nvme_ctrlr.o 00:04:21.330 CC lib/nvme/nvme_fabric.o 00:04:21.330 CC lib/nvme/nvme_ns_cmd.o 00:04:21.330 CC lib/nvme/nvme_pcie_common.o 00:04:21.330 CC lib/nvme/nvme_ns.o 00:04:21.330 CC lib/nvme/nvme_pcie.o 00:04:21.330 CC lib/nvme/nvme_qpair.o 00:04:21.330 CC lib/nvme/nvme.o 00:04:22.265 LIB libspdk_thread.a 00:04:22.265 CC lib/nvme/nvme_quirks.o 00:04:22.265 SO libspdk_thread.so.10.1 00:04:22.265 CC lib/nvme/nvme_transport.o 00:04:22.265 CC lib/nvme/nvme_discovery.o 00:04:22.265 SYMLINK libspdk_thread.so 00:04:22.265 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:22.265 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:22.524 CC lib/accel/accel.o 00:04:22.524 CC lib/accel/accel_rpc.o 00:04:22.524 CC lib/accel/accel_sw.o 00:04:22.782 CC lib/blob/blobstore.o 00:04:22.782 CC lib/nvme/nvme_tcp.o 00:04:23.040 CC lib/blob/request.o 00:04:23.040 CC lib/nvme/nvme_opal.o 00:04:23.040 CC lib/init/json_config.o 00:04:23.040 CC lib/blob/zeroes.o 00:04:23.040 CC lib/virtio/virtio.o 00:04:23.040 CC lib/virtio/virtio_vhost_user.o 00:04:23.299 CC lib/blob/blob_bs_dev.o 00:04:23.299 CC lib/init/subsystem.o 00:04:23.299 CC lib/init/subsystem_rpc.o 00:04:23.299 CC lib/virtio/virtio_vfio_user.o 00:04:23.299 CC lib/virtio/virtio_pci.o 00:04:23.558 CC lib/nvme/nvme_io_msg.o 00:04:23.558 CC lib/init/rpc.o 00:04:23.558 CC lib/nvme/nvme_poll_group.o 00:04:23.558 CC lib/nvme/nvme_zns.o 00:04:23.558 LIB libspdk_accel.a 00:04:23.558 CC lib/nvme/nvme_stubs.o 00:04:23.558 SO libspdk_accel.so.16.0 00:04:23.558 LIB libspdk_init.a 00:04:23.558 LIB libspdk_virtio.a 00:04:23.558 SO libspdk_init.so.5.0 00:04:23.817 SYMLINK libspdk_accel.so 00:04:23.818 CC lib/nvme/nvme_auth.o 00:04:23.818 SO libspdk_virtio.so.7.0 00:04:23.818 SYMLINK libspdk_init.so 00:04:23.818 CC lib/nvme/nvme_cuse.o 00:04:23.818 SYMLINK libspdk_virtio.so 00:04:24.076 CC lib/bdev/bdev.o 00:04:24.076 CC lib/event/app.o 00:04:24.076 CC lib/event/reactor.o 00:04:24.076 CC lib/event/log_rpc.o 00:04:24.076 CC 
lib/bdev/bdev_rpc.o 00:04:24.076 CC lib/bdev/bdev_zone.o 00:04:24.335 CC lib/event/app_rpc.o 00:04:24.335 CC lib/event/scheduler_static.o 00:04:24.335 CC lib/nvme/nvme_rdma.o 00:04:24.335 CC lib/bdev/part.o 00:04:24.335 CC lib/bdev/scsi_nvme.o 00:04:24.593 LIB libspdk_event.a 00:04:24.593 SO libspdk_event.so.14.0 00:04:24.593 SYMLINK libspdk_event.so 00:04:25.566 LIB libspdk_blob.a 00:04:25.566 LIB libspdk_nvme.a 00:04:25.566 SO libspdk_blob.so.11.0 00:04:25.840 SYMLINK libspdk_blob.so 00:04:25.840 SO libspdk_nvme.so.13.1 00:04:25.840 CC lib/lvol/lvol.o 00:04:25.840 CC lib/blobfs/blobfs.o 00:04:25.840 CC lib/blobfs/tree.o 00:04:26.098 SYMLINK libspdk_nvme.so 00:04:26.356 LIB libspdk_bdev.a 00:04:26.356 SO libspdk_bdev.so.16.0 00:04:26.614 SYMLINK libspdk_bdev.so 00:04:26.614 CC lib/scsi/dev.o 00:04:26.614 CC lib/scsi/lun.o 00:04:26.614 CC lib/scsi/scsi.o 00:04:26.614 CC lib/scsi/port.o 00:04:26.614 CC lib/ublk/ublk.o 00:04:26.614 CC lib/nbd/nbd.o 00:04:26.614 CC lib/nvmf/ctrlr.o 00:04:26.614 CC lib/ftl/ftl_core.o 00:04:26.872 LIB libspdk_blobfs.a 00:04:26.872 SO libspdk_blobfs.so.10.0 00:04:26.872 LIB libspdk_lvol.a 00:04:26.872 CC lib/nvmf/ctrlr_discovery.o 00:04:26.872 SYMLINK libspdk_blobfs.so 00:04:26.872 CC lib/nbd/nbd_rpc.o 00:04:26.872 CC lib/scsi/scsi_bdev.o 00:04:26.872 SO libspdk_lvol.so.10.0 00:04:26.872 CC lib/ublk/ublk_rpc.o 00:04:27.130 SYMLINK libspdk_lvol.so 00:04:27.130 CC lib/ftl/ftl_init.o 00:04:27.130 CC lib/scsi/scsi_pr.o 00:04:27.130 CC lib/scsi/scsi_rpc.o 00:04:27.130 LIB libspdk_nbd.a 00:04:27.130 CC lib/scsi/task.o 00:04:27.130 SO libspdk_nbd.so.7.0 00:04:27.130 CC lib/nvmf/ctrlr_bdev.o 00:04:27.130 SYMLINK libspdk_nbd.so 00:04:27.130 CC lib/ftl/ftl_layout.o 00:04:27.130 CC lib/nvmf/subsystem.o 00:04:27.386 CC lib/ftl/ftl_debug.o 00:04:27.386 LIB libspdk_ublk.a 00:04:27.386 CC lib/nvmf/nvmf.o 00:04:27.386 SO libspdk_ublk.so.3.0 00:04:27.386 CC lib/ftl/ftl_io.o 00:04:27.386 CC lib/nvmf/nvmf_rpc.o 00:04:27.386 LIB libspdk_scsi.a 00:04:27.386 SYMLINK libspdk_ublk.so 00:04:27.386 CC lib/nvmf/transport.o 00:04:27.386 SO libspdk_scsi.so.9.0 00:04:27.386 CC lib/nvmf/tcp.o 00:04:27.644 CC lib/nvmf/stubs.o 00:04:27.644 SYMLINK libspdk_scsi.so 00:04:27.644 CC lib/ftl/ftl_sb.o 00:04:27.644 CC lib/ftl/ftl_l2p.o 00:04:27.902 CC lib/ftl/ftl_l2p_flat.o 00:04:27.902 CC lib/iscsi/conn.o 00:04:27.902 CC lib/iscsi/init_grp.o 00:04:27.902 CC lib/vhost/vhost.o 00:04:28.160 CC lib/ftl/ftl_nv_cache.o 00:04:28.160 CC lib/ftl/ftl_band.o 00:04:28.160 CC lib/ftl/ftl_band_ops.o 00:04:28.160 CC lib/nvmf/mdns_server.o 00:04:28.160 CC lib/iscsi/iscsi.o 00:04:28.418 CC lib/nvmf/rdma.o 00:04:28.418 CC lib/nvmf/auth.o 00:04:28.418 CC lib/ftl/ftl_writer.o 00:04:28.418 CC lib/iscsi/md5.o 00:04:28.677 CC lib/vhost/vhost_rpc.o 00:04:28.677 CC lib/vhost/vhost_scsi.o 00:04:28.677 CC lib/vhost/vhost_blk.o 00:04:28.677 CC lib/iscsi/param.o 00:04:28.677 CC lib/vhost/rte_vhost_user.o 00:04:28.935 CC lib/ftl/ftl_rq.o 00:04:28.935 CC lib/ftl/ftl_reloc.o 00:04:29.193 CC lib/ftl/ftl_l2p_cache.o 00:04:29.193 CC lib/ftl/ftl_p2l.o 00:04:29.193 CC lib/iscsi/portal_grp.o 00:04:29.193 CC lib/iscsi/tgt_node.o 00:04:29.451 CC lib/iscsi/iscsi_subsystem.o 00:04:29.451 CC lib/iscsi/iscsi_rpc.o 00:04:29.451 CC lib/ftl/mngt/ftl_mngt.o 00:04:29.451 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:29.710 CC lib/iscsi/task.o 00:04:29.710 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:29.710 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:29.710 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:29.710 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:29.710 LIB libspdk_vhost.a 
00:04:29.710 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:29.968 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:29.968 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:29.968 SO libspdk_vhost.so.8.0 00:04:29.968 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:29.968 LIB libspdk_iscsi.a 00:04:29.968 SYMLINK libspdk_vhost.so 00:04:29.968 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:29.968 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:29.968 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:29.968 CC lib/ftl/utils/ftl_conf.o 00:04:29.968 SO libspdk_iscsi.so.8.0 00:04:29.968 CC lib/ftl/utils/ftl_md.o 00:04:30.226 CC lib/ftl/utils/ftl_mempool.o 00:04:30.226 CC lib/ftl/utils/ftl_bitmap.o 00:04:30.226 CC lib/ftl/utils/ftl_property.o 00:04:30.226 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:30.226 SYMLINK libspdk_iscsi.so 00:04:30.226 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:30.226 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:30.226 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:30.226 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:30.226 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:30.226 LIB libspdk_nvmf.a 00:04:30.485 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:30.485 SO libspdk_nvmf.so.19.0 00:04:30.485 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:30.485 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:30.485 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:30.485 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:30.485 CC lib/ftl/base/ftl_base_dev.o 00:04:30.485 CC lib/ftl/base/ftl_base_bdev.o 00:04:30.485 CC lib/ftl/ftl_trace.o 00:04:30.743 SYMLINK libspdk_nvmf.so 00:04:30.743 LIB libspdk_ftl.a 00:04:31.001 SO libspdk_ftl.so.9.0 00:04:31.568 SYMLINK libspdk_ftl.so 00:04:31.827 CC module/env_dpdk/env_dpdk_rpc.o 00:04:31.827 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:31.827 CC module/sock/posix/posix.o 00:04:31.827 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:31.827 CC module/accel/ioat/accel_ioat.o 00:04:31.827 CC module/accel/iaa/accel_iaa.o 00:04:31.827 CC module/accel/dsa/accel_dsa.o 00:04:31.827 CC module/accel/error/accel_error.o 00:04:31.827 CC module/keyring/file/keyring.o 00:04:31.827 CC module/blob/bdev/blob_bdev.o 00:04:31.827 LIB libspdk_env_dpdk_rpc.a 00:04:31.827 SO libspdk_env_dpdk_rpc.so.6.0 00:04:31.827 SYMLINK libspdk_env_dpdk_rpc.so 00:04:31.827 CC module/accel/dsa/accel_dsa_rpc.o 00:04:32.084 LIB libspdk_scheduler_dpdk_governor.a 00:04:32.084 CC module/keyring/file/keyring_rpc.o 00:04:32.084 CC module/accel/error/accel_error_rpc.o 00:04:32.084 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:32.084 CC module/accel/ioat/accel_ioat_rpc.o 00:04:32.084 CC module/accel/iaa/accel_iaa_rpc.o 00:04:32.084 LIB libspdk_scheduler_dynamic.a 00:04:32.084 SO libspdk_scheduler_dynamic.so.4.0 00:04:32.084 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:32.084 LIB libspdk_accel_dsa.a 00:04:32.084 LIB libspdk_blob_bdev.a 00:04:32.084 SYMLINK libspdk_scheduler_dynamic.so 00:04:32.084 SO libspdk_blob_bdev.so.11.0 00:04:32.084 SO libspdk_accel_dsa.so.5.0 00:04:32.084 LIB libspdk_accel_ioat.a 00:04:32.084 LIB libspdk_keyring_file.a 00:04:32.084 LIB libspdk_accel_error.a 00:04:32.084 LIB libspdk_accel_iaa.a 00:04:32.084 SO libspdk_keyring_file.so.1.0 00:04:32.084 SO libspdk_accel_error.so.2.0 00:04:32.084 SO libspdk_accel_ioat.so.6.0 00:04:32.084 SYMLINK libspdk_blob_bdev.so 00:04:32.084 SYMLINK libspdk_accel_dsa.so 00:04:32.343 SO libspdk_accel_iaa.so.3.0 00:04:32.343 SYMLINK libspdk_accel_error.so 00:04:32.343 SYMLINK libspdk_accel_ioat.so 00:04:32.343 SYMLINK libspdk_keyring_file.so 00:04:32.343 CC module/scheduler/gscheduler/gscheduler.o 00:04:32.343 SYMLINK 
libspdk_accel_iaa.so 00:04:32.343 CC module/sock/uring/uring.o 00:04:32.343 CC module/keyring/linux/keyring.o 00:04:32.343 CC module/keyring/linux/keyring_rpc.o 00:04:32.343 LIB libspdk_scheduler_gscheduler.a 00:04:32.343 SO libspdk_scheduler_gscheduler.so.4.0 00:04:32.343 LIB libspdk_keyring_linux.a 00:04:32.601 CC module/bdev/delay/vbdev_delay.o 00:04:32.601 CC module/bdev/gpt/gpt.o 00:04:32.601 CC module/bdev/error/vbdev_error.o 00:04:32.601 CC module/blobfs/bdev/blobfs_bdev.o 00:04:32.601 CC module/bdev/lvol/vbdev_lvol.o 00:04:32.601 SO libspdk_keyring_linux.so.1.0 00:04:32.601 LIB libspdk_sock_posix.a 00:04:32.601 SYMLINK libspdk_scheduler_gscheduler.so 00:04:32.601 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:32.601 SO libspdk_sock_posix.so.6.0 00:04:32.601 SYMLINK libspdk_keyring_linux.so 00:04:32.601 CC module/bdev/gpt/vbdev_gpt.o 00:04:32.601 CC module/bdev/malloc/bdev_malloc.o 00:04:32.601 SYMLINK libspdk_sock_posix.so 00:04:32.601 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:32.601 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:32.601 CC module/bdev/error/vbdev_error_rpc.o 00:04:32.859 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:32.859 LIB libspdk_bdev_error.a 00:04:32.859 LIB libspdk_blobfs_bdev.a 00:04:32.859 SO libspdk_bdev_error.so.6.0 00:04:32.859 SO libspdk_blobfs_bdev.so.6.0 00:04:32.859 LIB libspdk_bdev_gpt.a 00:04:32.859 CC module/bdev/null/bdev_null.o 00:04:32.859 SO libspdk_bdev_gpt.so.6.0 00:04:32.859 SYMLINK libspdk_blobfs_bdev.so 00:04:32.859 SYMLINK libspdk_bdev_error.so 00:04:32.859 CC module/bdev/null/bdev_null_rpc.o 00:04:32.859 LIB libspdk_bdev_malloc.a 00:04:32.859 LIB libspdk_bdev_delay.a 00:04:33.118 LIB libspdk_bdev_lvol.a 00:04:33.118 SYMLINK libspdk_bdev_gpt.so 00:04:33.118 LIB libspdk_sock_uring.a 00:04:33.118 CC module/bdev/nvme/bdev_nvme.o 00:04:33.118 SO libspdk_bdev_delay.so.6.0 00:04:33.118 SO libspdk_bdev_malloc.so.6.0 00:04:33.118 SO libspdk_bdev_lvol.so.6.0 00:04:33.118 SO libspdk_sock_uring.so.5.0 00:04:33.118 SYMLINK libspdk_bdev_delay.so 00:04:33.118 SYMLINK libspdk_bdev_malloc.so 00:04:33.118 CC module/bdev/passthru/vbdev_passthru.o 00:04:33.118 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:33.118 SYMLINK libspdk_sock_uring.so 00:04:33.118 SYMLINK libspdk_bdev_lvol.so 00:04:33.118 CC module/bdev/raid/bdev_raid.o 00:04:33.118 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:33.118 CC module/bdev/split/vbdev_split.o 00:04:33.118 LIB libspdk_bdev_null.a 00:04:33.118 SO libspdk_bdev_null.so.6.0 00:04:33.377 CC module/bdev/uring/bdev_uring.o 00:04:33.377 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:33.377 CC module/bdev/aio/bdev_aio.o 00:04:33.377 SYMLINK libspdk_bdev_null.so 00:04:33.377 CC module/bdev/aio/bdev_aio_rpc.o 00:04:33.377 LIB libspdk_bdev_passthru.a 00:04:33.377 CC module/bdev/split/vbdev_split_rpc.o 00:04:33.377 CC module/bdev/ftl/bdev_ftl.o 00:04:33.377 SO libspdk_bdev_passthru.so.6.0 00:04:33.635 CC module/bdev/raid/bdev_raid_rpc.o 00:04:33.635 SYMLINK libspdk_bdev_passthru.so 00:04:33.635 CC module/bdev/raid/bdev_raid_sb.o 00:04:33.635 LIB libspdk_bdev_split.a 00:04:33.635 SO libspdk_bdev_split.so.6.0 00:04:33.635 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:33.635 LIB libspdk_bdev_aio.a 00:04:33.635 CC module/bdev/uring/bdev_uring_rpc.o 00:04:33.635 SO libspdk_bdev_aio.so.6.0 00:04:33.635 SYMLINK libspdk_bdev_split.so 00:04:33.635 CC module/bdev/raid/raid0.o 00:04:33.635 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:33.635 SYMLINK libspdk_bdev_aio.so 00:04:33.635 CC module/bdev/nvme/nvme_rpc.o 00:04:33.635 CC 
module/bdev/raid/raid1.o 00:04:33.894 CC module/bdev/raid/concat.o 00:04:33.894 LIB libspdk_bdev_zone_block.a 00:04:33.894 LIB libspdk_bdev_uring.a 00:04:33.894 SO libspdk_bdev_zone_block.so.6.0 00:04:33.894 SO libspdk_bdev_uring.so.6.0 00:04:33.894 CC module/bdev/iscsi/bdev_iscsi.o 00:04:33.894 SYMLINK libspdk_bdev_zone_block.so 00:04:33.894 SYMLINK libspdk_bdev_uring.so 00:04:33.894 CC module/bdev/nvme/bdev_mdns_client.o 00:04:33.894 CC module/bdev/nvme/vbdev_opal.o 00:04:33.894 LIB libspdk_bdev_ftl.a 00:04:33.894 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:34.152 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:34.152 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:34.152 SO libspdk_bdev_ftl.so.6.0 00:04:34.152 LIB libspdk_bdev_raid.a 00:04:34.152 SYMLINK libspdk_bdev_ftl.so 00:04:34.152 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:34.152 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:34.152 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:34.152 SO libspdk_bdev_raid.so.6.0 00:04:34.152 LIB libspdk_bdev_iscsi.a 00:04:34.410 SYMLINK libspdk_bdev_raid.so 00:04:34.410 SO libspdk_bdev_iscsi.so.6.0 00:04:34.410 SYMLINK libspdk_bdev_iscsi.so 00:04:34.669 LIB libspdk_bdev_virtio.a 00:04:34.669 SO libspdk_bdev_virtio.so.6.0 00:04:34.669 SYMLINK libspdk_bdev_virtio.so 00:04:35.258 LIB libspdk_bdev_nvme.a 00:04:35.258 SO libspdk_bdev_nvme.so.7.0 00:04:35.517 SYMLINK libspdk_bdev_nvme.so 00:04:35.774 CC module/event/subsystems/sock/sock.o 00:04:35.774 CC module/event/subsystems/scheduler/scheduler.o 00:04:35.774 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:35.774 CC module/event/subsystems/vmd/vmd.o 00:04:35.774 CC module/event/subsystems/iobuf/iobuf.o 00:04:35.774 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:35.774 CC module/event/subsystems/keyring/keyring.o 00:04:35.774 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:36.032 LIB libspdk_event_vhost_blk.a 00:04:36.032 LIB libspdk_event_keyring.a 00:04:36.032 LIB libspdk_event_sock.a 00:04:36.032 LIB libspdk_event_scheduler.a 00:04:36.032 LIB libspdk_event_vmd.a 00:04:36.032 SO libspdk_event_vhost_blk.so.3.0 00:04:36.032 LIB libspdk_event_iobuf.a 00:04:36.032 SO libspdk_event_scheduler.so.4.0 00:04:36.032 SO libspdk_event_sock.so.5.0 00:04:36.032 SO libspdk_event_keyring.so.1.0 00:04:36.032 SO libspdk_event_vmd.so.6.0 00:04:36.032 SO libspdk_event_iobuf.so.3.0 00:04:36.032 SYMLINK libspdk_event_vhost_blk.so 00:04:36.032 SYMLINK libspdk_event_sock.so 00:04:36.032 SYMLINK libspdk_event_scheduler.so 00:04:36.032 SYMLINK libspdk_event_keyring.so 00:04:36.290 SYMLINK libspdk_event_vmd.so 00:04:36.290 SYMLINK libspdk_event_iobuf.so 00:04:36.567 CC module/event/subsystems/accel/accel.o 00:04:36.567 LIB libspdk_event_accel.a 00:04:36.567 SO libspdk_event_accel.so.6.0 00:04:36.567 SYMLINK libspdk_event_accel.so 00:04:36.834 CC module/event/subsystems/bdev/bdev.o 00:04:37.091 LIB libspdk_event_bdev.a 00:04:37.091 SO libspdk_event_bdev.so.6.0 00:04:37.349 SYMLINK libspdk_event_bdev.so 00:04:37.349 CC module/event/subsystems/scsi/scsi.o 00:04:37.607 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:37.607 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:37.607 CC module/event/subsystems/ublk/ublk.o 00:04:37.607 CC module/event/subsystems/nbd/nbd.o 00:04:37.607 LIB libspdk_event_ublk.a 00:04:37.607 LIB libspdk_event_nbd.a 00:04:37.607 LIB libspdk_event_scsi.a 00:04:37.607 SO libspdk_event_scsi.so.6.0 00:04:37.607 SO libspdk_event_nbd.so.6.0 00:04:37.607 SO libspdk_event_ublk.so.3.0 00:04:37.865 SYMLINK libspdk_event_nbd.so 00:04:37.865 SYMLINK 
libspdk_event_ublk.so 00:04:37.865 SYMLINK libspdk_event_scsi.so 00:04:37.865 LIB libspdk_event_nvmf.a 00:04:37.865 SO libspdk_event_nvmf.so.6.0 00:04:37.865 SYMLINK libspdk_event_nvmf.so 00:04:38.124 CC module/event/subsystems/iscsi/iscsi.o 00:04:38.124 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:38.124 LIB libspdk_event_vhost_scsi.a 00:04:38.124 LIB libspdk_event_iscsi.a 00:04:38.124 SO libspdk_event_vhost_scsi.so.3.0 00:04:38.382 SO libspdk_event_iscsi.so.6.0 00:04:38.382 SYMLINK libspdk_event_vhost_scsi.so 00:04:38.382 SYMLINK libspdk_event_iscsi.so 00:04:38.382 SO libspdk.so.6.0 00:04:38.382 SYMLINK libspdk.so 00:04:38.640 TEST_HEADER include/spdk/accel.h 00:04:38.640 CC test/rpc_client/rpc_client_test.o 00:04:38.640 CXX app/trace/trace.o 00:04:38.640 TEST_HEADER include/spdk/accel_module.h 00:04:38.640 TEST_HEADER include/spdk/assert.h 00:04:38.640 TEST_HEADER include/spdk/barrier.h 00:04:38.640 TEST_HEADER include/spdk/base64.h 00:04:38.640 TEST_HEADER include/spdk/bdev.h 00:04:38.640 TEST_HEADER include/spdk/bdev_module.h 00:04:38.640 TEST_HEADER include/spdk/bdev_zone.h 00:04:38.921 TEST_HEADER include/spdk/bit_array.h 00:04:38.921 TEST_HEADER include/spdk/bit_pool.h 00:04:38.921 TEST_HEADER include/spdk/blob_bdev.h 00:04:38.921 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:38.921 TEST_HEADER include/spdk/blobfs.h 00:04:38.921 TEST_HEADER include/spdk/blob.h 00:04:38.921 TEST_HEADER include/spdk/conf.h 00:04:38.921 TEST_HEADER include/spdk/config.h 00:04:38.921 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:38.921 TEST_HEADER include/spdk/cpuset.h 00:04:38.921 TEST_HEADER include/spdk/crc16.h 00:04:38.921 TEST_HEADER include/spdk/crc32.h 00:04:38.921 TEST_HEADER include/spdk/crc64.h 00:04:38.921 TEST_HEADER include/spdk/dif.h 00:04:38.921 TEST_HEADER include/spdk/dma.h 00:04:38.921 TEST_HEADER include/spdk/endian.h 00:04:38.921 TEST_HEADER include/spdk/env_dpdk.h 00:04:38.921 TEST_HEADER include/spdk/env.h 00:04:38.921 TEST_HEADER include/spdk/event.h 00:04:38.921 TEST_HEADER include/spdk/fd_group.h 00:04:38.921 TEST_HEADER include/spdk/fd.h 00:04:38.921 TEST_HEADER include/spdk/file.h 00:04:38.921 TEST_HEADER include/spdk/ftl.h 00:04:38.921 TEST_HEADER include/spdk/gpt_spec.h 00:04:38.921 TEST_HEADER include/spdk/hexlify.h 00:04:38.921 TEST_HEADER include/spdk/histogram_data.h 00:04:38.921 TEST_HEADER include/spdk/idxd.h 00:04:38.921 TEST_HEADER include/spdk/idxd_spec.h 00:04:38.921 TEST_HEADER include/spdk/init.h 00:04:38.921 CC test/thread/poller_perf/poller_perf.o 00:04:38.921 TEST_HEADER include/spdk/ioat.h 00:04:38.921 TEST_HEADER include/spdk/ioat_spec.h 00:04:38.921 CC examples/ioat/perf/perf.o 00:04:38.921 TEST_HEADER include/spdk/iscsi_spec.h 00:04:38.921 CC examples/util/zipf/zipf.o 00:04:38.921 TEST_HEADER include/spdk/json.h 00:04:38.921 TEST_HEADER include/spdk/jsonrpc.h 00:04:38.921 TEST_HEADER include/spdk/keyring.h 00:04:38.921 TEST_HEADER include/spdk/keyring_module.h 00:04:38.921 TEST_HEADER include/spdk/likely.h 00:04:38.921 TEST_HEADER include/spdk/log.h 00:04:38.921 TEST_HEADER include/spdk/lvol.h 00:04:38.921 TEST_HEADER include/spdk/memory.h 00:04:38.921 TEST_HEADER include/spdk/mmio.h 00:04:38.921 TEST_HEADER include/spdk/nbd.h 00:04:38.921 TEST_HEADER include/spdk/net.h 00:04:38.921 TEST_HEADER include/spdk/notify.h 00:04:38.921 TEST_HEADER include/spdk/nvme.h 00:04:38.921 TEST_HEADER include/spdk/nvme_intel.h 00:04:38.921 CC test/dma/test_dma/test_dma.o 00:04:38.921 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:38.921 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:04:38.921 TEST_HEADER include/spdk/nvme_spec.h 00:04:38.921 TEST_HEADER include/spdk/nvme_zns.h 00:04:38.921 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:38.921 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:38.921 TEST_HEADER include/spdk/nvmf.h 00:04:38.921 TEST_HEADER include/spdk/nvmf_spec.h 00:04:38.921 TEST_HEADER include/spdk/nvmf_transport.h 00:04:38.921 CC test/app/bdev_svc/bdev_svc.o 00:04:38.921 TEST_HEADER include/spdk/opal.h 00:04:38.921 TEST_HEADER include/spdk/opal_spec.h 00:04:38.921 TEST_HEADER include/spdk/pci_ids.h 00:04:38.921 TEST_HEADER include/spdk/pipe.h 00:04:38.921 TEST_HEADER include/spdk/queue.h 00:04:38.921 TEST_HEADER include/spdk/reduce.h 00:04:38.921 TEST_HEADER include/spdk/rpc.h 00:04:38.921 TEST_HEADER include/spdk/scheduler.h 00:04:38.921 TEST_HEADER include/spdk/scsi.h 00:04:38.921 TEST_HEADER include/spdk/scsi_spec.h 00:04:38.921 TEST_HEADER include/spdk/sock.h 00:04:38.921 TEST_HEADER include/spdk/stdinc.h 00:04:38.921 TEST_HEADER include/spdk/string.h 00:04:38.921 TEST_HEADER include/spdk/thread.h 00:04:38.921 TEST_HEADER include/spdk/trace.h 00:04:38.921 TEST_HEADER include/spdk/trace_parser.h 00:04:38.921 TEST_HEADER include/spdk/tree.h 00:04:38.921 TEST_HEADER include/spdk/ublk.h 00:04:38.921 CC test/env/mem_callbacks/mem_callbacks.o 00:04:38.921 TEST_HEADER include/spdk/util.h 00:04:38.921 TEST_HEADER include/spdk/uuid.h 00:04:38.921 TEST_HEADER include/spdk/version.h 00:04:38.921 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:38.921 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:38.921 TEST_HEADER include/spdk/vhost.h 00:04:38.921 TEST_HEADER include/spdk/vmd.h 00:04:38.921 TEST_HEADER include/spdk/xor.h 00:04:38.921 LINK rpc_client_test 00:04:38.921 TEST_HEADER include/spdk/zipf.h 00:04:38.921 CXX test/cpp_headers/accel.o 00:04:39.191 LINK interrupt_tgt 00:04:39.191 LINK poller_perf 00:04:39.191 LINK zipf 00:04:39.191 LINK ioat_perf 00:04:39.191 CXX test/cpp_headers/accel_module.o 00:04:39.191 LINK bdev_svc 00:04:39.191 CXX test/cpp_headers/assert.o 00:04:39.191 LINK spdk_trace 00:04:39.191 CC app/trace_record/trace_record.o 00:04:39.449 LINK test_dma 00:04:39.449 CC test/env/vtophys/vtophys.o 00:04:39.449 CXX test/cpp_headers/barrier.o 00:04:39.449 CC examples/ioat/verify/verify.o 00:04:39.449 CC app/nvmf_tgt/nvmf_main.o 00:04:39.449 CXX test/cpp_headers/base64.o 00:04:39.449 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:39.449 LINK vtophys 00:04:39.450 LINK spdk_trace_record 00:04:39.708 LINK nvmf_tgt 00:04:39.708 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:39.708 LINK mem_callbacks 00:04:39.708 CXX test/cpp_headers/bdev.o 00:04:39.708 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:39.708 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:39.708 LINK verify 00:04:39.708 CXX test/cpp_headers/bdev_module.o 00:04:39.708 LINK env_dpdk_post_init 00:04:39.708 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:39.708 CC app/iscsi_tgt/iscsi_tgt.o 00:04:39.966 CXX test/cpp_headers/bdev_zone.o 00:04:39.966 CC test/env/memory/memory_ut.o 00:04:39.966 CC test/event/event_perf/event_perf.o 00:04:39.966 LINK nvme_fuzz 00:04:39.966 CC test/nvme/aer/aer.o 00:04:39.966 CC examples/thread/thread/thread_ex.o 00:04:39.966 LINK iscsi_tgt 00:04:39.966 CC test/accel/dif/dif.o 00:04:39.966 CXX test/cpp_headers/bit_array.o 00:04:40.223 LINK vhost_fuzz 00:04:40.223 LINK event_perf 00:04:40.223 CXX test/cpp_headers/bit_pool.o 00:04:40.223 LINK thread 00:04:40.223 LINK aer 00:04:40.223 CXX test/cpp_headers/blob_bdev.o 00:04:40.481 
CC test/nvme/reset/reset.o 00:04:40.481 CC test/event/reactor/reactor.o 00:04:40.481 CC app/spdk_tgt/spdk_tgt.o 00:04:40.481 CC test/blobfs/mkfs/mkfs.o 00:04:40.481 CXX test/cpp_headers/blobfs_bdev.o 00:04:40.481 LINK dif 00:04:40.481 LINK reactor 00:04:40.739 CC examples/sock/hello_world/hello_sock.o 00:04:40.739 LINK reset 00:04:40.739 CXX test/cpp_headers/blobfs.o 00:04:40.739 LINK spdk_tgt 00:04:40.739 LINK mkfs 00:04:40.739 CC examples/vmd/lsvmd/lsvmd.o 00:04:40.739 CXX test/cpp_headers/blob.o 00:04:40.739 CC test/event/reactor_perf/reactor_perf.o 00:04:40.739 LINK lsvmd 00:04:40.997 CXX test/cpp_headers/conf.o 00:04:40.997 LINK hello_sock 00:04:40.997 CC test/nvme/sgl/sgl.o 00:04:40.997 CC test/nvme/e2edp/nvme_dp.o 00:04:40.997 LINK reactor_perf 00:04:40.997 CC test/nvme/overhead/overhead.o 00:04:40.997 CC app/spdk_lspci/spdk_lspci.o 00:04:40.997 LINK memory_ut 00:04:40.997 CXX test/cpp_headers/config.o 00:04:40.997 CXX test/cpp_headers/cpuset.o 00:04:40.997 CC examples/vmd/led/led.o 00:04:41.254 LINK spdk_lspci 00:04:41.254 CC test/event/app_repeat/app_repeat.o 00:04:41.254 LINK nvme_dp 00:04:41.254 LINK sgl 00:04:41.254 LINK iscsi_fuzz 00:04:41.254 CXX test/cpp_headers/crc16.o 00:04:41.254 CC examples/idxd/perf/perf.o 00:04:41.254 LINK overhead 00:04:41.254 LINK led 00:04:41.254 CC test/env/pci/pci_ut.o 00:04:41.254 CC app/spdk_nvme_perf/perf.o 00:04:41.254 LINK app_repeat 00:04:41.513 CXX test/cpp_headers/crc32.o 00:04:41.513 CXX test/cpp_headers/crc64.o 00:04:41.513 CXX test/cpp_headers/dif.o 00:04:41.513 CC test/nvme/err_injection/err_injection.o 00:04:41.513 LINK idxd_perf 00:04:41.513 CC test/app/histogram_perf/histogram_perf.o 00:04:41.513 CXX test/cpp_headers/dma.o 00:04:41.771 CC test/lvol/esnap/esnap.o 00:04:41.771 CC test/event/scheduler/scheduler.o 00:04:41.771 LINK err_injection 00:04:41.771 CC test/app/jsoncat/jsoncat.o 00:04:41.771 CC test/nvme/startup/startup.o 00:04:41.771 LINK pci_ut 00:04:41.771 LINK histogram_perf 00:04:41.771 CXX test/cpp_headers/endian.o 00:04:41.771 LINK jsoncat 00:04:41.771 CXX test/cpp_headers/env_dpdk.o 00:04:41.771 LINK startup 00:04:42.029 CC examples/accel/perf/accel_perf.o 00:04:42.029 LINK scheduler 00:04:42.029 CXX test/cpp_headers/env.o 00:04:42.029 CXX test/cpp_headers/event.o 00:04:42.029 CC test/app/stub/stub.o 00:04:42.029 CXX test/cpp_headers/fd_group.o 00:04:42.029 CXX test/cpp_headers/fd.o 00:04:42.029 CC test/nvme/reserve/reserve.o 00:04:42.287 CC examples/blob/hello_world/hello_blob.o 00:04:42.287 CC test/nvme/simple_copy/simple_copy.o 00:04:42.287 LINK spdk_nvme_perf 00:04:42.287 CC examples/nvme/hello_world/hello_world.o 00:04:42.287 LINK stub 00:04:42.287 CXX test/cpp_headers/file.o 00:04:42.287 CC examples/nvme/reconnect/reconnect.o 00:04:42.287 LINK reserve 00:04:42.287 LINK accel_perf 00:04:42.544 CXX test/cpp_headers/ftl.o 00:04:42.544 LINK hello_world 00:04:42.544 LINK simple_copy 00:04:42.544 LINK hello_blob 00:04:42.544 CC app/spdk_nvme_identify/identify.o 00:04:42.544 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:42.544 CC examples/nvme/arbitration/arbitration.o 00:04:42.544 CC examples/nvme/hotplug/hotplug.o 00:04:42.544 CXX test/cpp_headers/gpt_spec.o 00:04:42.801 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:42.801 CC test/nvme/connect_stress/connect_stress.o 00:04:42.801 LINK reconnect 00:04:42.801 CC examples/blob/cli/blobcli.o 00:04:42.801 CXX test/cpp_headers/hexlify.o 00:04:42.801 CXX test/cpp_headers/histogram_data.o 00:04:42.801 LINK hotplug 00:04:42.801 LINK cmb_copy 00:04:42.801 LINK connect_stress 
00:04:43.058 LINK arbitration 00:04:43.058 CXX test/cpp_headers/idxd.o 00:04:43.058 LINK nvme_manage 00:04:43.058 CC examples/nvme/abort/abort.o 00:04:43.058 CC app/spdk_nvme_discover/discovery_aer.o 00:04:43.058 CC test/nvme/boot_partition/boot_partition.o 00:04:43.316 CC app/spdk_top/spdk_top.o 00:04:43.316 CXX test/cpp_headers/idxd_spec.o 00:04:43.316 CC test/bdev/bdevio/bdevio.o 00:04:43.316 LINK spdk_nvme_identify 00:04:43.316 LINK blobcli 00:04:43.316 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:43.316 LINK boot_partition 00:04:43.316 LINK spdk_nvme_discover 00:04:43.316 CXX test/cpp_headers/init.o 00:04:43.574 LINK pmr_persistence 00:04:43.574 LINK abort 00:04:43.574 CXX test/cpp_headers/ioat.o 00:04:43.574 CC app/vhost/vhost.o 00:04:43.574 CC test/nvme/compliance/nvme_compliance.o 00:04:43.574 CC app/spdk_dd/spdk_dd.o 00:04:43.574 LINK bdevio 00:04:43.832 CC app/fio/nvme/fio_plugin.o 00:04:43.832 CXX test/cpp_headers/ioat_spec.o 00:04:43.832 LINK vhost 00:04:43.832 CC app/fio/bdev/fio_plugin.o 00:04:43.832 CC examples/bdev/hello_world/hello_bdev.o 00:04:43.832 CXX test/cpp_headers/iscsi_spec.o 00:04:43.832 LINK nvme_compliance 00:04:44.090 CC examples/bdev/bdevperf/bdevperf.o 00:04:44.090 CXX test/cpp_headers/json.o 00:04:44.090 LINK spdk_top 00:04:44.090 LINK spdk_dd 00:04:44.090 LINK hello_bdev 00:04:44.090 CC test/nvme/fused_ordering/fused_ordering.o 00:04:44.090 CXX test/cpp_headers/jsonrpc.o 00:04:44.090 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:44.347 CXX test/cpp_headers/keyring.o 00:04:44.347 LINK spdk_nvme 00:04:44.347 LINK spdk_bdev 00:04:44.347 CXX test/cpp_headers/keyring_module.o 00:04:44.347 CXX test/cpp_headers/likely.o 00:04:44.347 LINK fused_ordering 00:04:44.347 LINK doorbell_aers 00:04:44.347 CXX test/cpp_headers/log.o 00:04:44.347 CC test/nvme/fdp/fdp.o 00:04:44.347 CXX test/cpp_headers/lvol.o 00:04:44.347 CC test/nvme/cuse/cuse.o 00:04:44.605 CXX test/cpp_headers/memory.o 00:04:44.605 CXX test/cpp_headers/mmio.o 00:04:44.605 CXX test/cpp_headers/nbd.o 00:04:44.605 CXX test/cpp_headers/net.o 00:04:44.605 CXX test/cpp_headers/notify.o 00:04:44.605 CXX test/cpp_headers/nvme.o 00:04:44.605 CXX test/cpp_headers/nvme_intel.o 00:04:44.605 CXX test/cpp_headers/nvme_ocssd.o 00:04:44.605 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:44.605 CXX test/cpp_headers/nvme_spec.o 00:04:44.864 LINK bdevperf 00:04:44.864 CXX test/cpp_headers/nvme_zns.o 00:04:44.864 CXX test/cpp_headers/nvmf_cmd.o 00:04:44.864 LINK fdp 00:04:44.864 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:44.864 CXX test/cpp_headers/nvmf.o 00:04:44.864 CXX test/cpp_headers/nvmf_spec.o 00:04:44.864 CXX test/cpp_headers/nvmf_transport.o 00:04:44.864 CXX test/cpp_headers/opal.o 00:04:44.864 CXX test/cpp_headers/opal_spec.o 00:04:44.864 CXX test/cpp_headers/pci_ids.o 00:04:45.122 CXX test/cpp_headers/pipe.o 00:04:45.122 CXX test/cpp_headers/queue.o 00:04:45.122 CXX test/cpp_headers/reduce.o 00:04:45.122 CXX test/cpp_headers/rpc.o 00:04:45.122 CXX test/cpp_headers/scheduler.o 00:04:45.122 CXX test/cpp_headers/scsi.o 00:04:45.122 CXX test/cpp_headers/scsi_spec.o 00:04:45.122 CXX test/cpp_headers/sock.o 00:04:45.122 CXX test/cpp_headers/stdinc.o 00:04:45.122 CC examples/nvmf/nvmf/nvmf.o 00:04:45.122 CXX test/cpp_headers/string.o 00:04:45.380 CXX test/cpp_headers/thread.o 00:04:45.380 CXX test/cpp_headers/trace.o 00:04:45.380 CXX test/cpp_headers/trace_parser.o 00:04:45.380 CXX test/cpp_headers/tree.o 00:04:45.380 CXX test/cpp_headers/ublk.o 00:04:45.380 CXX test/cpp_headers/util.o 00:04:45.380 CXX 
test/cpp_headers/uuid.o 00:04:45.380 CXX test/cpp_headers/version.o 00:04:45.380 CXX test/cpp_headers/vfio_user_pci.o 00:04:45.380 CXX test/cpp_headers/vfio_user_spec.o 00:04:45.380 CXX test/cpp_headers/vhost.o 00:04:45.380 CXX test/cpp_headers/vmd.o 00:04:45.380 LINK nvmf 00:04:45.712 CXX test/cpp_headers/xor.o 00:04:45.713 CXX test/cpp_headers/zipf.o 00:04:45.713 LINK cuse 00:04:46.295 LINK esnap 00:04:46.861 00:04:46.861 real 0m54.464s 00:04:46.861 user 5m1.516s 00:04:46.861 sys 1m8.551s 00:04:46.861 04:00:39 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:46.861 04:00:39 make -- common/autotest_common.sh@10 -- $ set +x 00:04:46.861 ************************************ 00:04:46.861 END TEST make 00:04:46.861 ************************************ 00:04:46.861 04:00:40 -- common/autotest_common.sh@1142 -- $ return 0 00:04:46.861 04:00:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:46.861 04:00:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:46.861 04:00:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:46.861 04:00:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.861 04:00:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:46.861 04:00:40 -- pm/common@44 -- $ pid=6042 00:04:46.861 04:00:40 -- pm/common@50 -- $ kill -TERM 6042 00:04:46.861 04:00:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.861 04:00:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:46.861 04:00:40 -- pm/common@44 -- $ pid=6044 00:04:46.861 04:00:40 -- pm/common@50 -- $ kill -TERM 6044 00:04:46.861 04:00:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.861 04:00:40 -- nvmf/common.sh@7 -- # uname -s 00:04:46.861 04:00:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.861 04:00:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.861 04:00:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.861 04:00:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.861 04:00:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.861 04:00:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.861 04:00:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.861 04:00:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.861 04:00:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.861 04:00:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.861 04:00:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:04:46.861 04:00:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:04:46.861 04:00:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.861 04:00:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.861 04:00:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:46.861 04:00:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.861 04:00:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.861 04:00:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.861 04:00:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.861 04:00:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.861 04:00:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.861 04:00:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.861 04:00:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.861 04:00:40 -- paths/export.sh@5 -- # export PATH 00:04:46.861 04:00:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.861 04:00:40 -- nvmf/common.sh@47 -- # : 0 00:04:46.861 04:00:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:46.861 04:00:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:46.861 04:00:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.861 04:00:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.861 04:00:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.861 04:00:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:46.861 04:00:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:46.861 04:00:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:46.861 04:00:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:46.861 04:00:40 -- spdk/autotest.sh@32 -- # uname -s 00:04:46.861 04:00:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:46.861 04:00:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:46.861 04:00:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:46.861 04:00:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:46.861 04:00:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:46.861 04:00:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:46.861 04:00:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:46.861 04:00:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:46.861 04:00:40 -- spdk/autotest.sh@48 -- # udevadm_pid=66473 00:04:46.861 04:00:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:46.861 04:00:40 -- pm/common@17 -- # local monitor 00:04:46.861 04:00:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.861 04:00:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.861 04:00:40 -- pm/common@25 -- # sleep 1 00:04:46.861 04:00:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:46.861 04:00:40 -- pm/common@21 -- # date +%s 00:04:46.861 04:00:40 -- pm/common@21 -- # date +%s 00:04:46.861 04:00:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721707240 00:04:46.861 
04:00:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721707240 00:04:47.120 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721707240_collect-vmstat.pm.log 00:04:47.120 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721707240_collect-cpu-load.pm.log 00:04:48.056 04:00:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:48.056 04:00:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:48.056 04:00:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:48.056 04:00:41 -- common/autotest_common.sh@10 -- # set +x 00:04:48.056 04:00:41 -- spdk/autotest.sh@59 -- # create_test_list 00:04:48.056 04:00:41 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:48.056 04:00:41 -- common/autotest_common.sh@10 -- # set +x 00:04:48.056 04:00:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:48.056 04:00:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:48.056 04:00:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:48.056 04:00:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:48.056 04:00:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:48.056 04:00:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:48.056 04:00:41 -- common/autotest_common.sh@1455 -- # uname 00:04:48.056 04:00:41 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:48.056 04:00:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:48.056 04:00:41 -- common/autotest_common.sh@1475 -- # uname 00:04:48.056 04:00:41 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:48.056 04:00:41 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:48.056 04:00:41 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:48.056 04:00:41 -- spdk/autotest.sh@72 -- # hash lcov 00:04:48.056 04:00:41 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:48.056 04:00:41 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:48.056 --rc lcov_branch_coverage=1 00:04:48.056 --rc lcov_function_coverage=1 00:04:48.056 --rc genhtml_branch_coverage=1 00:04:48.056 --rc genhtml_function_coverage=1 00:04:48.056 --rc genhtml_legend=1 00:04:48.056 --rc geninfo_all_blocks=1 00:04:48.056 ' 00:04:48.056 04:00:41 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:48.056 --rc lcov_branch_coverage=1 00:04:48.056 --rc lcov_function_coverage=1 00:04:48.056 --rc genhtml_branch_coverage=1 00:04:48.056 --rc genhtml_function_coverage=1 00:04:48.056 --rc genhtml_legend=1 00:04:48.056 --rc geninfo_all_blocks=1 00:04:48.056 ' 00:04:48.056 04:00:41 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:48.056 --rc lcov_branch_coverage=1 00:04:48.056 --rc lcov_function_coverage=1 00:04:48.056 --rc genhtml_branch_coverage=1 00:04:48.056 --rc genhtml_function_coverage=1 00:04:48.056 --rc genhtml_legend=1 00:04:48.056 --rc geninfo_all_blocks=1 00:04:48.056 --no-external' 00:04:48.056 04:00:41 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:48.056 --rc lcov_branch_coverage=1 00:04:48.056 --rc lcov_function_coverage=1 00:04:48.056 --rc genhtml_branch_coverage=1 00:04:48.056 --rc genhtml_function_coverage=1 00:04:48.056 --rc genhtml_legend=1 00:04:48.056 --rc geninfo_all_blocks=1 00:04:48.056 --no-external' 00:04:48.056 04:00:41 -- spdk/autotest.sh@83 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:48.056 lcov: LCOV version 1.14 00:04:48.056 04:00:41 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:02.957 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:02.957 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:12.940 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:12.940 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:12.941 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:12.941 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:12.941 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:12.941 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:16.235 04:01:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:16.235 04:01:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.235 04:01:09 -- common/autotest_common.sh@10 -- # set +x 00:05:16.235 04:01:09 -- spdk/autotest.sh@91 -- # rm -f 00:05:16.235 04:01:09 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.801 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:16.801 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:16.801 04:01:10 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:16.801 04:01:10 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:16.801 04:01:10 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:16.801 04:01:10 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:16.801 04:01:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:16.801 04:01:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:16.801 04:01:10 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:16.801 04:01:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:16.801 04:01:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.801 04:01:10 -- common/autotest_common.sh@1672 -- # for 
nvme in /sys/block/nvme* 00:05:16.801 04:01:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:16.801 04:01:10 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:16.801 04:01:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:16.801 04:01:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.801 04:01:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:16.801 04:01:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:16.801 04:01:10 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:16.801 04:01:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:16.801 04:01:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.801 04:01:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:16.801 04:01:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:16.801 04:01:10 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:16.801 04:01:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:16.801 04:01:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.801 04:01:10 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:16.801 04:01:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:16.801 04:01:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:16.801 04:01:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:16.801 04:01:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:16.801 04:01:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:17.058 No valid GPT data, bailing 00:05:17.058 04:01:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:17.058 04:01:10 -- scripts/common.sh@391 -- # pt= 00:05:17.058 04:01:10 -- scripts/common.sh@392 -- # return 1 00:05:17.058 04:01:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:17.058 1+0 records in 00:05:17.058 1+0 records out 00:05:17.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472491 s, 222 MB/s 00:05:17.058 04:01:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.058 04:01:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:17.058 04:01:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:17.058 04:01:10 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:17.058 04:01:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:17.058 No valid GPT data, bailing 00:05:17.058 04:01:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:17.058 04:01:10 -- scripts/common.sh@391 -- # pt= 00:05:17.058 04:01:10 -- scripts/common.sh@392 -- # return 1 00:05:17.058 04:01:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:17.058 1+0 records in 00:05:17.058 1+0 records out 00:05:17.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488819 s, 215 MB/s 00:05:17.058 04:01:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.058 04:01:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:17.058 04:01:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:17.058 04:01:10 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:17.058 04:01:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:17.058 No valid GPT data, bailing 
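The xtrace entries around this point show autotest's pre-test cleanup probing each NVMe namespace: it skips zoned block devices, asks spdk-gpt.py and blkid whether the device already carries a partition table, and zeroes the first megabyte of anything that looks unused. The lines below are only a minimal standalone sketch of that probe-and-wipe flow, not the SPDK script itself; it omits the spdk-gpt.py step, and the wipe size and messages are illustrative assumptions.

#!/usr/bin/env bash
# Sketch only: probe NVMe namespaces and wipe the ones that look unused.
# Requires root. The real autotest flow also consults scripts/spdk-gpt.py.
shopt -s extglob
for dev in /dev/nvme*n!(*p*); do            # namespaces, not partitions (same glob as the log)
    name=$(basename "$dev")
    # Skip zoned namespaces; they must not be wiped this way.
    if [[ -e "/sys/block/$name/queue/zoned" && $(cat "/sys/block/$name/queue/zoned") != none ]]; then
        echo "skipping zoned device $dev"
        continue
    fi
    # A detectable partition table means the device is in use; leave it alone.
    if blkid -s PTTYPE -o value "$dev" | grep -q .; then
        echo "$dev has a partition table, leaving it alone"
        continue
    fi
    # Otherwise zero the first MiB, as the dd lines in the log above do.
    dd if=/dev/zero of="$dev" bs=1M count=1
done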
00:05:17.058 04:01:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:17.058 04:01:10 -- scripts/common.sh@391 -- # pt= 00:05:17.058 04:01:10 -- scripts/common.sh@392 -- # return 1 00:05:17.058 04:01:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:17.058 1+0 records in 00:05:17.058 1+0 records out 00:05:17.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412058 s, 254 MB/s 00:05:17.058 04:01:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.058 04:01:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:17.058 04:01:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:17.059 04:01:10 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:17.059 04:01:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:17.317 No valid GPT data, bailing 00:05:17.317 04:01:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:17.317 04:01:10 -- scripts/common.sh@391 -- # pt= 00:05:17.317 04:01:10 -- scripts/common.sh@392 -- # return 1 00:05:17.317 04:01:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:17.317 1+0 records in 00:05:17.317 1+0 records out 00:05:17.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416872 s, 252 MB/s 00:05:17.317 04:01:10 -- spdk/autotest.sh@118 -- # sync 00:05:17.596 04:01:10 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:17.596 04:01:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:17.596 04:01:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:19.581 04:01:12 -- spdk/autotest.sh@124 -- # uname -s 00:05:19.581 04:01:12 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:19.581 04:01:12 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:19.581 04:01:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.581 04:01:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.581 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:19.581 ************************************ 00:05:19.581 START TEST setup.sh 00:05:19.581 ************************************ 00:05:19.581 04:01:12 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:19.581 * Looking for test storage... 00:05:19.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:19.581 04:01:12 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:19.581 04:01:12 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:19.581 04:01:12 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:19.581 04:01:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.581 04:01:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.581 04:01:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:19.581 ************************************ 00:05:19.581 START TEST acl 00:05:19.581 ************************************ 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:19.581 * Looking for test storage... 
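The run_test calls just above (run_test setup.sh ..., run_test acl ...) are what produce the ************ START TEST / END TEST banners and the real/user/sys timing summaries that recur through this log. The function below is only a simplified illustration of that banner-and-timing pattern, not SPDK's actual run_test implementation; its name and exact output format are assumptions.

# Illustrative only: a banner-printing test wrapper in the spirit of the
# run_test calls shown in this log.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # run the test command and print its timing
    local rc=$?
    echo "************************************"
    echo "END TEST $name (exit $rc)"
    echo "************************************"
    return $rc
}

# Example: run_test_sketch acl /path/to/acl.sh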
00:05:19.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:19.581 04:01:12 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:19.581 04:01:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.581 04:01:12 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:19.581 04:01:12 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:19.581 04:01:12 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:19.581 04:01:12 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:19.581 04:01:12 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:19.581 04:01:12 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.581 04:01:12 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.514 04:01:13 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:20.514 04:01:13 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:20.514 04:01:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.514 04:01:13 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:20.514 04:01:13 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.514 04:01:13 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:21.101 04:01:14 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.101 Hugepages 00:05:21.101 node hugesize free / total 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.101 00:05:21.101 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:21.101 04:01:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.360 04:01:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:21.360 04:01:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:21.360 04:01:14 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:21.360 04:01:14 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:21.360 04:01:14 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:21.360 04:01:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:21.360 04:01:14 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:21.360 04:01:14 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:21.360 04:01:14 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.360 04:01:14 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.360 04:01:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:21.360 ************************************ 00:05:21.360 START TEST denied 00:05:21.360 ************************************ 00:05:21.360 04:01:14 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:21.360 04:01:14 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:21.360 04:01:14 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:21.360 04:01:14 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.360 04:01:14 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:21.360 04:01:14 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.296 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:22.296 04:01:15 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:22.296 04:01:15 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:05:22.296 04:01:15 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:22.296 04:01:15 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:22.296 04:01:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:22.296 04:01:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:22.296 04:01:15 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:22.296 04:01:15 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:22.296 04:01:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.296 04:01:15 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.861 00:05:22.861 real 0m1.445s 00:05:22.861 user 0m0.578s 00:05:22.861 sys 0m0.817s 00:05:22.861 04:01:15 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.861 ************************************ 00:05:22.861 04:01:15 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:22.861 END TEST denied 00:05:22.861 ************************************ 00:05:22.861 04:01:15 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:22.861 04:01:15 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:22.861 04:01:15 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.861 04:01:15 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.861 04:01:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:22.861 ************************************ 00:05:22.861 START TEST allowed 00:05:22.861 ************************************ 00:05:22.861 04:01:15 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:22.861 04:01:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:22.861 04:01:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:22.861 04:01:15 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:22.861 04:01:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.861 04:01:15 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.429 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.429 04:01:16 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:23.429 04:01:16 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:23.429 04:01:16 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:23.429 04:01:16 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:23.429 04:01:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:23.687 04:01:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:23.687 04:01:16 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:23.687 04:01:16 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:23.687 04:01:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.687 04:01:16 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.254 00:05:24.254 real 0m1.559s 00:05:24.254 user 0m0.697s 00:05:24.254 sys 0m0.836s 00:05:24.254 04:01:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:24.254 04:01:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:24.254 ************************************ 00:05:24.254 END TEST allowed 00:05:24.254 ************************************ 00:05:24.254 04:01:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:24.254 ************************************ 00:05:24.254 END TEST acl 00:05:24.254 ************************************ 00:05:24.254 00:05:24.254 real 0m4.810s 00:05:24.254 user 0m2.102s 00:05:24.254 sys 0m2.638s 00:05:24.254 04:01:17 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.254 04:01:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:24.254 04:01:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:24.254 04:01:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:24.254 04:01:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.254 04:01:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.254 04:01:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:24.514 ************************************ 00:05:24.514 START TEST hugepages 00:05:24.514 ************************************ 00:05:24.514 04:01:17 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:24.514 * Looking for test storage... 00:05:24.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4593892 kB' 'MemAvailable: 7381556 kB' 'Buffers: 2436 kB' 'Cached: 2991308 kB' 'SwapCached: 0 kB' 'Active: 436232 kB' 'Inactive: 2662352 kB' 'Active(anon): 115332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662352 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 106800 kB' 'Mapped: 48588 kB' 'Shmem: 10492 kB' 'KReclaimable: 82700 kB' 'Slab: 161876 kB' 'SReclaimable: 82700 kB' 'SUnreclaim: 79176 kB' 'KernelStack: 6524 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 345808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.514 04:01:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.514 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.515 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:24.516 04:01:17 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:24.516 04:01:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:24.516 04:01:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.516 04:01:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.516 04:01:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:24.516 ************************************ 00:05:24.516 START TEST default_setup 00:05:24.516 ************************************ 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:24.516 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:24.517 04:01:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:24.517 04:01:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.517 04:01:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.340 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.340 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6636384 kB' 'MemAvailable: 9423920 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453400 kB' 'Inactive: 2662360 kB' 'Active(anon): 132500 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123624 kB' 'Mapped: 48724 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161524 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79092 kB' 'KernelStack: 6512 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.340 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
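The long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries around this point is the xtrace of setup/common.sh's get_meminfo scanning /proc/meminfo field by field until it reaches the requested key (the backslash-escaped key is simply how bash xtrace renders the literal pattern on the right-hand side of ==). A minimal sketch of that scan pattern, simplified to read the file directly rather than the mapfile'd array the script uses, and with an illustrative function name rather than the project's helper:

  meminfo_get() {
      local want=$1 var val _
      # /proc/meminfo lines look like "Hugepagesize:       2048 kB";
      # splitting on ':' and spaces yields key, value, unit.
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue   # skip fields until the key matches
          echo "$val"                         # print the numeric value (kB implied)
          return 0
      done < /proc/meminfo
      return 1
  }
  # meminfo_get Hugepagesize -> 2048, matching the "echo 2048" seen earlier in
  # the trace; as the trace also shows, the real get_meminfo can additionally
  # take a NUMA node and read /sys/devices/system/node/node<N>/meminfo instead.
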
00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.341 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6644444 kB' 'MemAvailable: 9431980 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453016 kB' 'Inactive: 2662360 kB' 'Active(anon): 132116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123292 kB' 'Mapped: 48600 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161536 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79104 kB' 'KernelStack: 6512 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.342 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
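This second pass of the same scan is verify_nr_hugepages pulling HugePages_Surp back out of /proc/meminfo after default_setup has asked for its pool. The knobs being exercised are the standard kernel hugepage interfaces that appear earlier in the trace: nr_hugepages under /sys/kernel/mm/hugepages plus the per-node files that clear_hp zeroes. A stand-alone sketch of requesting and verifying a 2 MiB hugepage pool by hand (needs root; 1024 matches the count the test derives from its 2 GiB target, but any value works):

  HPDIR=/sys/kernel/mm/hugepages/hugepages-2048kB

  echo 0    > "$HPDIR/nr_hugepages"   # drop any existing reservation, as clear_hp does per node
  echo 1024 > "$HPDIR/nr_hugepages"   # ask the kernel for 1024 x 2 MiB pages

  # Confirm the kernel actually granted the pool by re-reading the counters
  # this loop is extracting from /proc/meminfo.
  grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo

On a fragmented machine the kernel may grant fewer pages than requested, which is why the test re-reads the HugePages_* counters instead of trusting the write.
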
00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.343 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6643976 kB' 'MemAvailable: 9431512 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 452928 kB' 'Inactive: 2662360 kB' 'Active(anon): 132028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123172 kB' 'Mapped: 
48600 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161524 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79092 kB' 'KernelStack: 6496 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.344 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
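The full meminfo snapshot printf'ed a few lines up already carries the numbers the surrounding hugepages.sh checks rely on: HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB and Hugetlb 2097152 kB. The arithmetic those checks boil down to can be sketched as follows (variable names mirror the surp=, resv= and nr_hugepages= assignments this trace makes; the echo strings are illustrative, not hugepages.sh output):

  # values lifted from the /proc/meminfo snapshot printed above
  nr_hugepages=1024     # default-sized pages configured for the test
  surp=0                # HugePages_Surp
  resv=0                # HugePages_Rsvd
  hugepagesize_kb=2048  # Hugepagesize
  hugetlb_kb=2097152    # Hugetlb

  # same consistency test hugepages.sh runs below: the pool the kernel reports
  # must equal the requested pages plus any surplus and reserved ones
  (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"

  # and with only 2 MiB pages in play, the HugeTLB footprint is pages * page size
  (( hugetlb_kb == nr_hugepages * hugepagesize_kb )) \
      && echo "2097152 kB == 1024 * 2048 kB"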
00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.345 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 
04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:25.606 nr_hugepages=1024 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:25.606 resv_hugepages=0 00:05:25.606 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.606 surplus_hugepages=0 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.607 anon_hugepages=0 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6643976 kB' 'MemAvailable: 9431512 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 452924 kB' 'Inactive: 2662360 kB' 'Active(anon): 132024 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123164 kB' 'Mapped: 48600 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161524 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79092 kB' 'KernelStack: 6496 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.607 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 
04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:25.608 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6644448 kB' 'MemUsed: 5597524 kB' 'SwapCached: 0 kB' 'Active: 452732 kB' 'Inactive: 2662360 kB' 'Active(anon): 131832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 2993732 kB' 'Mapped: 48600 kB' 'AnonPages: 122976 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82432 kB' 'Slab: 161524 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.609 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
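[editor's note] The xtrace block above is the per-node branch of the setup/common.sh get_meminfo helper scanning /sys/devices/system/node/node0/meminfo for HugePages_Surp and returning 0. As a reading aid, here is a minimal sketch of that helper, paraphrased from the traced line numbers rather than copied from the SPDK source, showing how the "Node N " prefix is stripped and how the IFS=': ' read loop yields the bare field value without its kB suffix:

    #!/usr/bin/env bash
    # Sketch (assumption: reconstructed from the trace, not verbatim setup/common.sh).
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1      # field to look up, e.g. HugePages_Surp
        local node=$2     # optional NUMA node id; empty means system-wide
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node meminfo carries the same fields, each prefixed with "Node <N> ".
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix if present

        # Walk the fields; the third read slot swallows the trailing "kB", so only
        # the numeric value is echoed, exactly as the trace's "echo 0" shows.
        while IFS=': ' read -r var val _; do
            [[ $var == $get ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # The trace above corresponds to a call like:
    get_meminfo HugePages_Surp 0    # prints 0: no surplus huge pages on node 0

The per_node_1G_alloc test that starts next in the log drives the same helper after requesting 1 GiB of huge pages pinned to node 0, i.e. NRHUGE=512 with HUGENODE=0, since 1048576 kB divided by the 2048 kB Hugepagesize reported above gives 512 pages. [end editor's note]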
00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.610 node0=1024 expecting 1024 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:25.610 00:05:25.610 real 0m0.966s 00:05:25.610 user 0m0.454s 00:05:25.610 sys 0m0.460s 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.610 04:01:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:25.610 ************************************ 00:05:25.610 END TEST default_setup 00:05:25.610 ************************************ 00:05:25.610 04:01:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:25.610 04:01:18 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:25.610 04:01:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.610 04:01:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.610 04:01:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:25.610 ************************************ 00:05:25.610 START TEST per_node_1G_alloc 00:05:25.610 ************************************ 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.610 04:01:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.610 04:01:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.870 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.870 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7693284 kB' 'MemAvailable: 10480824 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453332 kB' 'Inactive: 2662364 kB' 'Active(anon): 132432 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123592 kB' 'Mapped: 48696 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161540 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79108 kB' 'KernelStack: 6520 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:25.870 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.871 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7693288 kB' 'MemAvailable: 10480828 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 452820 kB' 'Inactive: 2662364 kB' 'Active(anon): 131920 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123044 kB' 'Mapped: 48600 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161548 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79116 kB' 'KernelStack: 6496 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:25.872 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.138 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.139 04:01:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.139 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # read/continue: each remaining /proc/meminfo key from SReclaimable through HugePages_Total is read and skipped, none matching HugePages_Surp 00:05:26.140 04:01:19 
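What this stretch of the trace is doing: setup/common.sh's get_meminfo walks a meminfo file one field at a time under xtrace, skipping every key until the requested one (HugePages_Surp here) matches, then echoes its value and returns. A minimal standalone sketch of that lookup pattern, with an illustrative helper name (get_meminfo_value) and simplified parsing rather than the project's exact code:

    # Sketch only: look up one key in /proc/meminfo, or in a NUMA node's own
    # meminfo file when a node number is given (as the per-node pass later does).
    get_meminfo_value() {
        local key=$1 node=${2:-} file=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # per-node files prefix every field with "Node <N> "; strip it first
            [[ -n $node ]] && line=${line#Node $node }
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$key" ]]; then
                echo "$val"    # a size in kB, or a page count for HugePages_* keys
                return 0
            fi
        done < "$file"
        return 1               # requested key not present
    }

Against the memory snapshot printed just below, get_meminfo_value HugePages_Surp prints 0 and get_meminfo_value HugePages_Total prints 512, matching the echo 0 and echo 512 returns seen in this trace.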
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7693288 kB' 'MemAvailable: 10480828 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453080 kB' 'Inactive: 2662364 kB' 'Active(anon): 132180 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123304 kB' 'Mapped: 48600 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161548 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79116 kB' 'KernelStack: 6496 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:05:26.140 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # read/continue: each /proc/meminfo key from Active(anon) through HugePages_Total is read and skipped, none matching HugePages_Rsvd 00:05:26.142 04:01:19 
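The point of these scans shows up in the lines that follow: hugepages.sh stores the two lookups as surp=0 and resv=0, echoes nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and then asserts that the configured pool matches what the kernel reports, i.e. 512 == nr_hugepages + surp + resv. With Hugepagesize at 2048 kB, 512 pages is the 1048576 kB reported as Hugetlb in the snapshots, which is the 1G the per_node_1G_alloc test aims for. A rough sketch of that consistency check, reusing the get_meminfo_value helper sketched above (illustrative names, not the exact hugepages.sh code):

    # Sketch only: the configured hugepage count should equal the reported total
    # once surplus and reserved pages are taken into account.
    check_hugepage_pool() {
        local requested=$1 surp resv total
        surp=$(get_meminfo_value HugePages_Surp)     # 0 in this run
        resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in this run
        total=$(get_meminfo_value HugePages_Total)   # 512 in this run
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
        (( requested == total + surp + resv ))       # non-zero exit on mismatch
    }

check_hugepage_pool 512 passes here because both the surplus and reserved counts are zero.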
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:26.142 nr_hugepages=512 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:26.142 resv_hugepages=0 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.142 surplus_hugepages=0 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.142 anon_hugepages=0 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7693288 kB' 'MemAvailable: 10480828 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453068 kB' 'Inactive: 2662364 kB' 'Active(anon): 132168 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 
kB' 'Writeback: 0 kB' 'AnonPages: 123300 kB' 'Mapped: 48600 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161548 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79116 kB' 'KernelStack: 6512 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:26.142 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.143 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # read/continue: each /proc/meminfo key from Inactive through Unaccepted is read and checked, none matching HugePages_Total 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc 
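Once HugePages_Total comes back as 512 (the echo 512 just below), get_nodes derives the node list from /sys/devices/system/node/node* (no_nodes=1 on this runner), records the 512 pages for node 0, and the same lookup is repeated against /sys/devices/system/node/node0/meminfo to read the per-node counters. A sketch of that per-node pass, again with illustrative names and the get_meminfo_value helper from above:

    # Sketch only: report each NUMA node's hugepage counters from its own
    # meminfo file under /sys/devices/system/node/.
    report_per_node_hugepages() {
        local path node
        for path in /sys/devices/system/node/node[0-9]*; do
            node=${path##*node}
            echo "node$node:" \
                "HugePages_Total=$(get_meminfo_value HugePages_Total "$node")" \
                "HugePages_Free=$(get_meminfo_value HugePages_Free "$node")" \
                "HugePages_Surp=$(get_meminfo_value HugePages_Surp "$node")"
        done
    }

On this single-node VM that prints one line for node0 with 512 total, 512 free and 0 surplus, matching the node0 snapshot that follows.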
-- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7693288 kB' 'MemUsed: 4548684 kB' 'SwapCached: 0 kB' 'Active: 453040 kB' 'Inactive: 2662364 kB' 'Active(anon): 132140 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 2993732 kB' 'Mapped: 48600 kB' 'AnonPages: 123300 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 82432 kB' 'Slab: 161548 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 
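One detail worth noting in the get_meminfo calls above (common.sh@28-29): the file is slurped with mapfile -t mem and the per-node 'Node <N> ' prefix is stripped from every element in a single extglob substitution, mem=("${mem[@]#Node +([0-9]) }"), so the same field-matching loop serves both /proc/meminfo and the per-node files. A self-contained illustration of that substitution on sample values taken from this trace:

    #!/usr/bin/env bash
    shopt -s extglob                      # +([0-9]) below needs extended globs
    mem=('Node 0 MemTotal: 12241972 kB'
         'Node 0 HugePages_Total: 512'
         'Node 0 HugePages_Surp: 0')
    # Strip the "Node <N> " prefix from every array element, as common.sh@29 does.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # prints:
    #   MemTotal: 12241972 kB
    #   HugePages_Total: 512
    #   HugePages_Surp: 0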
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.144 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.145 node0=512 expecting 512 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:26.145 00:05:26.145 real 0m0.528s 00:05:26.145 user 0m0.268s 00:05:26.145 sys 0m0.297s 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.145 04:01:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:26.145 ************************************ 00:05:26.145 END TEST per_node_1G_alloc 00:05:26.145 ************************************ 00:05:26.145 04:01:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:26.145 04:01:19 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:26.145 04:01:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.145 04:01:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.145 04:01:19 
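The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" records above are get_meminfo (setup/common.sh) scanning a meminfo file one field at a time until it reaches the requested key, then echoing that key's value (the "echo 512" and "echo 0" lines). A minimal sketch of that lookup, reconstructed from this xtrace rather than taken from the script source (the function name below and its exact internals are assumptions):

    # Sketch only: pieced together from the xtrace, not the verbatim setup/common.sh code.
    shopt -s extglob
    get_meminfo_sketch() {                 # usage: get_meminfo_sketch HugePages_Surp 0
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # the "echo 0" seen above
        done
        return 1
    }

Called with node 0 it returns the HugePages_Surp value 0 echoed above; with no node argument it falls back to /proc/meminfo, which is what the even_2G_alloc checks below do.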
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:26.145 ************************************ 00:05:26.145 START TEST even_2G_alloc 00:05:26.145 ************************************ 00:05:26.145 04:01:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:26.145 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:26.145 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:26.145 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:26.145 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.146 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.457 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.457 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc 
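Just above, the even_2G_alloc setup ran get_test_nr_hugepages 2097152 and arrived at nr_hugepages=1024 on a single node (no_nodes=1), then exported NRHUGE=1024 with HUGE_EVEN_ALLOC=yes before calling scripts/setup.sh. A sketch of that arithmetic, assuming both the requested size and default_hugepages are in kB, which reproduces the traced value and matches the Hugepagesize: 2048 kB entries in the meminfo dumps:

    # Assumption: size and default_hugepages are both in kB.
    size=2097152                                   # 2 GiB expressed in kB
    default_hugepages=2048                         # Hugepagesize: 2048 kB
    nr_hugepages=$(( size / default_hugepages ))   # 1024, matching nr_hugepages=1024
    no_nodes=1
    declare -a nodes_test
    nodes_test[no_nodes - 1]=$nr_hugepages         # the only node gets all 1024 pages
    echo "NRHUGE=$nr_hugepages"                    # NRHUGE=1024, HUGE_EVEN_ALLOC=yes

1024 pages of 2048 kB is the Hugetlb: 2097152 kB reported in the meminfo dumps that follow.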
-- setup/hugepages.sh@92 -- # local surp 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.457 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6661500 kB' 'MemAvailable: 9449040 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453444 kB' 'Inactive: 2662364 kB' 'Active(anon): 132544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123688 kB' 'Mapped: 48860 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161556 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79124 kB' 'KernelStack: 6516 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.458 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6661248 kB' 'MemAvailable: 9448788 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453224 kB' 'Inactive: 
2662364 kB' 'Active(anon): 132324 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123440 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161544 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79112 kB' 'KernelStack: 6496 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.459 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.460 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.722 04:01:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.722 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6661248 kB' 'MemAvailable: 9448788 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 452844 kB' 'Inactive: 2662364 kB' 'Active(anon): 131944 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161536 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79104 kB' 'KernelStack: 6512 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.723 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.724 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:05:26.725 nr_hugepages=1024 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:26.725 resv_hugepages=0 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.725 surplus_hugepages=0 00:05:26.725 anon_hugepages=0 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6661248 kB' 'MemAvailable: 9448788 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 452996 kB' 'Inactive: 2662364 kB' 'Active(anon): 132096 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161536 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79104 kB' 'KernelStack: 6496 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 
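Up to this point the even_2G_alloc path has resolved surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd) and then asserts that the kernel-reported HugePages_Total matches the requested pool size plus surplus and reserved pages. A condensed, self-contained sketch of that accounting check follows; it reads /proc/meminfo directly and the meminfo() helper name is illustrative, not the setup/common.sh implementation:

    # Minimal sketch of the hugepage accounting check traced above (illustrative).
    nr_hugepages=1024                     # requested pool size for this test
    meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
    surp=$(meminfo HugePages_Surp)        # 0 in this run
    resv=$(meminfo HugePages_Rsvd)        # 0 in this run
    total=$(meminfo HugePages_Total)      # 1024 in this run
    # The pool is sized correctly when total == requested + surplus + reserved.
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting" >&2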
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.725 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
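The long run of "[[ <field> == HugePages_Total ]] / continue" lines above is the xtrace of get_meminfo walking the captured meminfo snapshot one "key: value" pair at a time until it reaches the requested field, then echoing its value. A condensed sketch of that lookup pattern, reading the file directly rather than a captured array (so not the verbatim setup/common.sh code, which also strips the "Node <N>" prefix when a per-node meminfo file is used):

    # Condensed sketch of the key lookup being traced (illustrative only).
    get=HugePages_Total
    mem_f=/proc/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other meminfo field
        echo "$val"                        # e.g. 1024 for HugePages_Total
        break
    done < "$mem_f"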
00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.726 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.727 04:01:19 
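After the totals check, get_nodes enumerates /sys/devices/system/node/node<N> (a single node on this host, so nodes_sys[0]=1024 and no_nodes=1), and the test then re-verifies the counters from each node's own meminfo file, whose lines carry a "Node <N>" prefix. A rough per-node verification sketch under those assumptions; the loop and variable names here are hypothetical, not the hugepages.sh code:

    # Hypothetical per-node hugepage check; per-node meminfo lines look like
    # "Node 0 HugePages_Total:  1024", hence fields $3/$4 below.
    shopt -s nullglob
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
        echo "node${node}: HugePages_Total=${total}"
    done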
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6661604 kB' 'MemUsed: 5580368 kB' 'SwapCached: 0 kB' 'Active: 453000 kB' 'Inactive: 2662364 kB' 'Active(anon): 132100 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2993732 kB' 'Mapped: 48604 kB' 'AnonPages: 123200 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82432 kB' 'Slab: 161528 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.727 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.728 node0=1024 expecting 1024 00:05:26.728 
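The trace around this point wraps up even_2G_alloc: hugepages.sh tallies the per-node surplus into nodes_test, records the distinct counts in the sorted_t/sorted_s arrays, echoes 'node0=1024 expecting 1024', and the final [[ 1024 == 1024 ]] check in the END TEST block below confirms that the even 2G allocation placed the expected 1024 pages on node0. A hedged reconstruction of that comparison follows, with hypothetical variable contents standing in for the real per-node readings:

    # Hypothetical per-node tally; in the script the values come from per-node meminfo.
    declare -A nodes_test=( [0]=1024 )
    expected=1024
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting $expected"
        [[ ${nodes_test[node]} == "$expected" ]] || exit 1   # mirrors the [[ 1024 == 1024 ]] check
    done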
************************************ 00:05:26.728 END TEST even_2G_alloc 00:05:26.728 ************************************ 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:26.728 00:05:26.728 real 0m0.543s 00:05:26.728 user 0m0.266s 00:05:26.728 sys 0m0.293s 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.728 04:01:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:26.728 04:01:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:26.728 04:01:19 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:26.728 04:01:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.728 04:01:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.728 04:01:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:26.728 ************************************ 00:05:26.728 START TEST odd_alloc 00:05:26.728 ************************************ 00:05:26.728 04:01:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:26.728 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:26.728 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:26.728 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # 
HUGE_EVEN_ALLOC=yes 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.729 04:01:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.986 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.986 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6657420 kB' 'MemAvailable: 9444960 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453464 kB' 'Inactive: 2662364 kB' 'Active(anon): 132564 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123720 kB' 'Mapped: 48716 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161472 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79040 kB' 'KernelStack: 6552 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.248 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
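The long run of 'continue' lines above is bash xtrace from setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo with mapfile (the printf of the whole snapshot is visible above), then walks the fields with IFS=': ' read -r var val _, skipping every key that is not the one requested (here AnonHugePages) and echoing the matching value. A minimal self-contained sketch of the same pattern; the function name and loop shape are reconstructed from the trace, not copied from the script:

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field, e.g.: meminfo_get AnonHugePages
    meminfo_get() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo            # snapshot the file once, as the trace does
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # the repeated 'continue' lines in the trace
            echo "$val"
            return 0
        done
        return 1
    }
    meminfo_get "${1:-AnonHugePages}"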
00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 
04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.249 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
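The meminfo snapshot printed a few lines above reports 'HugePages_Total: 1025', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2099200 kB', which matches the odd_alloc sizing at the start of this test: HUGEMEM=2049 MiB is 2,098,176 kB, and with 2,048 kB pages that requires 1,025 pages (1,025 x 2,048 = 2,099,200 kB). The rounding step itself is not visible in this excerpt; the arithmetic below is only a sketch that reproduces the logged numbers, with hypothetical variable names:

    hugemem_mb=2049                                   # HUGEMEM from the odd_alloc setup
    hugepagesize_kb=2048                              # Hugepagesize reported in meminfo
    size_kb=$(( hugemem_mb * 1024 ))                  # 2098176 kB requested
    pages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # ceiling -> 1025
    echo "$pages pages = $(( pages * hugepagesize_kb )) kB"          # 1025 pages = 2099200 kB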
00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 6657168 kB' 'MemAvailable: 9444708 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453048 kB' 'Inactive: 2662364 kB' 'Active(anon): 132148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123268 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161480 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79048 kB' 'KernelStack: 6528 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 
04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.250 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 
04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.251 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6657168 kB' 'MemAvailable: 9444708 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453036 kB' 'Inactive: 2662364 kB' 'Active(anon): 132136 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123236 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161480 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79048 kB' 'KernelStack: 6496 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
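By this point verify_nr_hugepages has taken three readings: it first checks /sys/kernel/mm/transparent_hugepage/enabled (the '[[ always [madvise] never != *[never]* ]]' test earlier in the trace) before reading AnonHugePages, which yields anon=0, then HugePages_Surp yields surp=0, and the loop currently running resolves HugePages_Rsvd. The later per-node comparison against the expected 1025 pages is not part of this excerpt. A self-contained sketch of the three lookups, using awk in place of the script's own parser:

    #!/usr/bin/env bash
    # Reproduce the three readings visible in the trace; field names match /proc/meminfo.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then                # only count THP when it is not disabled
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    echo "anon=$anon surp=$surp resv=$resv"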
00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.252 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.253 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.254 nr_hugepages=1025 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:27.254 resv_hugepages=0 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.254 surplus_hugepages=0 00:05:27.254 anon_hugepages=0 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6656916 kB' 'MemAvailable: 9444456 kB' 'Buffers: 2436 kB' 'Cached: 2991296 kB' 'SwapCached: 0 kB' 'Active: 453096 kB' 'Inactive: 2662364 kB' 'Active(anon): 132196 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662364 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123356 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161476 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79044 kB' 'KernelStack: 6448 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.254 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.255 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6656916 kB' 'MemUsed: 5585056 kB' 'SwapCached: 0 kB' 'Active: 452984 kB' 'Inactive: 2662368 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2993736 kB' 'Mapped: 48604 kB' 'AnonPages: 123208 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82432 kB' 'Slab: 161480 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
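Here the same lookup runs again with node=0, so mem_f switches to /sys/devices/system/node/node0/meminfo and the "Node 0 " prefix is stripped before the scan. The surrounding hugepages.sh logic (@115-@117 in the trace) folds the reserved and surplus counts into a per-node total; roughly, assuming nodes_test was populated by get_nodes just above:

  # Sketch of the per-node accounting seen at setup/hugepages.sh@115-@117 (names taken from the trace).
  for node in "${!nodes_test[@]}"; do
          (( nodes_test[node] += resv ))                      # @116: resv is 0 in this run
          surp=$(get_meminfo HugePages_Surp "$node")          # @117: reads node0's meminfo
          (( nodes_test[node] += surp ))                      # both adjustments are no-ops here
  done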
00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.256 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
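The figures these lookups return are internally consistent: the node0 scan is about to echo HugePages_Surp=0, and the odd_alloc check at hugepages.sh@110 already confirmed that the 1025 pages requested equal HugePages_Total with no surplus or reserve. The totals also tie out against Hugepagesize; a quick sanity check with the values copied from the trace (illustrative only):

  # 1025 hugepages of 2048 kB each should account for the reported Hugetlb figure.
  hugepages=1025 hugepagesize_kb=2048 hugetlb_kb=2099200
  (( hugepages * hugepagesize_kb == hugetlb_kb )) &&
          echo "Hugetlb total matches: $((hugepages * hugepagesize_kb)) kB"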
00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:27.257 node0=1025 expecting 1025 00:05:27.257 ************************************ 00:05:27.257 END TEST odd_alloc 00:05:27.257 ************************************ 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:27.257 00:05:27.257 real 0m0.539s 00:05:27.257 user 0m0.246s 00:05:27.257 sys 0m0.309s 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.257 04:01:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:27.257 04:01:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:27.257 04:01:20 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:27.257 04:01:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.257 04:01:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.257 04:01:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:27.257 ************************************ 00:05:27.257 START TEST custom_alloc 00:05:27.257 ************************************ 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:27.257 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.258 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.829 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.829 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.829 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:27.829 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
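
For readers following the trace: the custom_alloc test converts the requested 1048576 kB pool into 512 hugepages (Hugepagesize is 2048 kB in this run), places all of them on the only NUMA node, and hands that to scripts/setup.sh through HUGENODE. The sketch below is a hedged reconstruction of that bookkeeping using hypothetical variable names that mirror the trace; it is not the repo's actual setup/hugepages.sh helpers.

    #!/usr/bin/env bash
    # Hedged sketch of the per-node hugepage bookkeeping traced above.
    size_kb=1048576                                   # requested pool size (1 GiB)
    hugepagesize_kb=2048                              # Hugepagesize reported in /proc/meminfo for this run
    nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 512 pages
    nodes_hp=( "$nr_hugepages" )                      # single-node VM: everything lands on node 0
    hugenode_args=()
    for node in "${!nodes_hp[@]}"; do
        hugenode_args+=( "nodes_hp[$node]=${nodes_hp[node]}" )
    done
    echo "HUGENODE='${hugenode_args[*]}'"             # prints HUGENODE='nodes_hp[0]=512'

This is consistent with the HugePages_Total: 512 value that shows up in the meminfo dumps verify_nr_hugepages reads back below.
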
# verify_nr_hugepages 00:05:27.829 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:27.829 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:27.829 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:27.829 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7712064 kB' 'MemAvailable: 10499608 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 453272 kB' 'Inactive: 2662368 kB' 'Active(anon): 132372 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123464 kB' 'Mapped: 48612 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161484 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79052 kB' 'KernelStack: 6468 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.830 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
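
The long runs of continue above (and repeated below for HugePages_Surp and HugePages_Rsvd) are the traced get_meminfo loop skipping every /proc/meminfo key that is not the one requested. The following is a simplified, self-contained stand-in for that lookup, assuming the same key/value layout as the dumps in this log; the real helper lives in setup/common.sh and uses mapfile, as the trace shows.

    #!/usr/bin/env bash
    # Simplified sketch of the get_meminfo lookup traced in this log: scan the
    # system-wide or per-node meminfo file, print the value of one key, default to 0.
    shopt -s extglob                                  # needed for the "Node N " prefix strip
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }               # per-node files prefix each key with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < "$mem_f"
        echo 0                                        # key not present
    }
    # Matches the values echoed in this run:
    anon=$(get_meminfo AnonHugePages)                 # 0
    surp=$(get_meminfo HugePages_Surp)                # 0
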
'MemTotal: 12241972 kB' 'MemFree: 7712064 kB' 'MemAvailable: 10499608 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 453092 kB' 'Inactive: 2662368 kB' 'Active(anon): 132192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123336 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161488 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79056 kB' 'KernelStack: 6512 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.831 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.832 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:27.833 04:01:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7712064 kB' 'MemAvailable: 10499608 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 452860 kB' 'Inactive: 2662368 kB' 'Active(anon): 131960 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123100 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161484 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79052 kB' 'KernelStack: 6512 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.833 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:27.834 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:27.835 nr_hugepages=512 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:27.835 resv_hugepages=0 
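The scan above is setup/common.sh walking /proc/meminfo field by field (IFS=': ', read -r var val _) until it reaches HugePages_Rsvd, echoing its value so hugepages.sh can record resv=0 alongside nr_hugepages=512. A minimal sketch of that lookup idea follows; the helper name is illustrative, and the real script first snapshots the file with mapfile and strips any leading "Node N" prefix, so this is a simplification rather than the setup/common.sh source:

  # Sketch only: fetch one /proc/meminfo field the way the trace above does.
  lookup_meminfo() {                      # illustrative name, not the script's helper
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          # Every non-matching field is skipped, mirroring the long run of 'continue' above.
          [[ $var == "$want" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  lookup_meminfo HugePages_Rsvd           # prints 0 on the test VM in this log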
00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.835 surplus_hugepages=0 00:05:27.835 anon_hugepages=0 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.835 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7712064 kB' 'MemAvailable: 10499608 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 453444 kB' 'Inactive: 2662368 kB' 'Active(anon): 132544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123696 kB' 'Mapped: 49124 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161484 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79052 kB' 'KernelStack: 6512 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 365292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.836 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 
04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.837 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7711560 kB' 'MemUsed: 4530412 kB' 'SwapCached: 0 kB' 'Active: 453104 kB' 'Inactive: 2662368 kB' 'Active(anon): 132204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2993736 kB' 'Mapped: 48604 kB' 'AnonPages: 123396 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82432 kB' 'Slab: 161452 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.838 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.839 04:01:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:27.839 node0=512 expecting 512 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:27.839 00:05:27.839 real 0m0.537s 00:05:27.839 user 0m0.280s 00:05:27.839 sys 0m0.264s 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.839 ************************************ 00:05:27.839 END TEST custom_alloc 00:05:27.839 ************************************ 00:05:27.839 04:01:21 setup.sh.hugepages.custom_alloc 
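custom_alloc finishes here: the trace has just confirmed that HugePages_Total (512) equals nr_hugepages + surplus + reserved (512 + 0 + 0) and that node 0 reports no surplus pages, which is what the "node0=512 expecting 512" check asserts. The same bookkeeping can be re-derived straight from the two meminfo files the script reads; the snippet below is a sketch of that arithmetic, not the hugepages.sh implementation:

  # Sketch: reproduce the custom_alloc verdict from the values shown in the log above.
  total=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)        # 512 in this run
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $NF}' /proc/meminfo)          # 0
  node0_surp=$(awk '/HugePages_Surp:/ {print $NF}' \
               /sys/devices/system/node/node0/meminfo)                # 0 ("Node 0 ..." lines)
  nr_hugepages=512                                                    # requested by the test
  if (( total == nr_hugepages + node0_surp + rsvd )); then
      echo "node0=${nr_hugepages} expecting ${nr_hugepages}"
  fi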
-- common/autotest_common.sh@10 -- # set +x 00:05:27.839 04:01:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:27.839 04:01:21 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:27.839 04:01:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.839 04:01:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.839 04:01:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:27.839 ************************************ 00:05:27.839 START TEST no_shrink_alloc 00:05:27.839 ************************************ 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.839 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:28.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.409 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:28.409 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:28.409 
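no_shrink_alloc begins the same way: get_test_nr_hugepages receives 2097152 (presumably kB, going by the 2048 kB Hugepagesize and the resulting count) plus node id 0, derives nr_hugepages=1024, and re-runs scripts/setup.sh, which reports the PCI devices it leaves bound to uio_pci_generic. A small sketch of that size-to-page conversion, with illustrative variable names:

  # Sketch: hugepage budget in kB divided by the page size gives the count seen in the log.
  size_kb=2097152
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
  echo "nr_hugepages=$(( size_kb / hugepage_kb ))"                 # -> nr_hugepages=1024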
04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.409 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6660128 kB' 'MemAvailable: 9447672 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 453420 kB' 'Inactive: 2662368 kB' 'Active(anon): 132520 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123708 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161468 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79036 kB' 'KernelStack: 6532 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 
04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 
04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6659372 kB' 'MemAvailable: 9446916 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 452988 kB' 'Inactive: 2662368 kB' 'Active(anon): 132088 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123552 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161456 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79024 kB' 'KernelStack: 6484 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.410 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:28.411 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6659372 kB' 'MemAvailable: 9446916 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 452820 kB' 'Inactive: 2662368 kB' 'Active(anon): 131920 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123280 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161452 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79020 kB' 'KernelStack: 6496 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.412 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:28.413 nr_hugepages=1024 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:28.413 resv_hugepages=0 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:28.413 surplus_hugepages=0 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:28.413 anon_hugepages=0 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
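The xtrace above is setup/common.sh's get_meminfo scanning a meminfo dump field by field: `IFS=': '` splits each record into key and value, every non-matching key hits the `continue` at common.sh@32, and the matching key (HugePages_Rsvd, value 0 here) is echoed at common.sh@33. A minimal, self-contained sketch of that pattern follows; get_meminfo_value is an illustrative name rather than the real helper, and reading /proc/meminfo directly (or the per-node file when a node id is given) is an assumption based only on the paths visible in the trace.

```bash
#!/usr/bin/env bash
shopt -s extglob
# Illustrative re-creation of the meminfo scan seen in the trace: read either
# /proc/meminfo or a per-node meminfo file, strip the "Node <N> " prefix the
# per-node files carry, and print the value of the requested field.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 MemFree: ..." -> "MemFree: ..."

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # not the requested key, keep scanning
        echo "$val"
        return 0
    done
    return 1
}

# Example usage: reserved and total huge pages system-wide, free pages on node 0.
get_meminfo_value HugePages_Rsvd
get_meminfo_value HugePages_Total
get_meminfo_value HugePages_Free 0
```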
00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6659372 kB' 'MemAvailable: 9446916 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 452864 kB' 'Inactive: 2662368 kB' 'Active(anon): 131964 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123352 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 82432 kB' 'Slab: 161452 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79020 kB' 'KernelStack: 6512 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
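Between the two scans, hugepages.sh (lines @100 through @110 in the trace) folds the values back together: resv=0 from the HugePages_Rsvd scan, surplus and anon both 0, and the HugePages_Total read that resumes below must equal nr_hugepages + surplus + reserved (1024 == 1024 + 0 + 0). A rough, self-contained sketch of that arithmetic, assuming plain awk over /proc/meminfo instead of the traced helper:

```bash
#!/usr/bin/env bash
# Illustrative check mirroring hugepages.sh@100-@110: the kernel's HugePages_Total
# must account for the configured pages plus surplus and reserved pages.
meminfo_field() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }

nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
resv=$(meminfo_field HugePages_Rsvd)
surp=$(meminfo_field HugePages_Surp)
total=$(meminfo_field HugePages_Total)

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting is consistent ($total pages)"
else
    echo "mismatch: HugePages_Total=$total, expected $((nr_hugepages + surp + resv))" >&2
    exit 1
fi
```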
00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.413 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6661796 kB' 'MemUsed: 5580176 kB' 'SwapCached: 0 kB' 'Active: 452796 kB' 'Inactive: 2662368 kB' 'Active(anon): 131896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 2993736 kB' 'Mapped: 48604 kB' 'AnonPages: 123288 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82432 kB' 'Slab: 161452 kB' 'SReclaimable: 82432 kB' 'SUnreclaim: 79020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 
04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.414 node0=1024 expecting 1024 00:05:28.414 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:28.415 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:28.415 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:28.415 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:28.415 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:28.415 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.415 04:01:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:28.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.942 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:28.942 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:28.942 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:28.942 04:01:22 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6655800 kB' 'MemAvailable: 9443336 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 448200 kB' 'Inactive: 2662368 kB' 'Active(anon): 127300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118436 kB' 'Mapped: 47944 kB' 'Shmem: 10468 kB' 'KReclaimable: 82416 kB' 'Slab: 161268 kB' 'SReclaimable: 82416 kB' 'SUnreclaim: 78852 kB' 'KernelStack: 6392 kB' 'PageTables: 3624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
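Earlier in this pass (hugepages.sh@112 through @130) the script also walked the NUMA nodes: get_nodes globbed /sys/devices/system/node/node+([0-9]), the per-node HugePages_Surp came from node0's own meminfo file, and the result was reported as "node0=1024 expecting 1024" before setup.sh was re-invoked with NRHUGE=512 and CLEAR_HUGE=no (hence the "Requested 512 hugepages but 1024 already allocated on node0" message). The sketch below shows a simplified per-node check in the same spirit; comparing each node's HugePages_Total against the node-local nr_hugepages sysfs counter is an assumption for illustration — the traced script derives its expected counts from its own bookkeeping rather than from sysfs.

```bash
#!/usr/bin/env bash
# Simplified per-node verification in the spirit of hugepages.sh@112-@130:
# report "node<N>=<got> expecting <want>" for every NUMA node.
shopt -s extglob nullglob

for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}

    # Assumption: take the expected count from the node-local sysfs counter for
    # the default 2 MiB page size (the traced script uses its own bookkeeping).
    want=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")

    # Per-node meminfo lines look like "Node 0 HugePages_Total: 1024".
    got=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")

    echo "node$node=$got expecting $want"
    [[ $got == "$want" ]] || exit 1
done
```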
00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.942 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6655800 kB' 'MemAvailable: 9443336 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 448128 kB' 'Inactive: 2662368 kB' 'Active(anon): 127228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118372 kB' 'Mapped: 47880 kB' 'Shmem: 10468 kB' 'KReclaimable: 82416 kB' 'Slab: 161216 kB' 'SReclaimable: 82416 kB' 'SUnreclaim: 78800 kB' 'KernelStack: 6400 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.943 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 
04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.944 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6655800 kB' 'MemAvailable: 9443336 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 447844 kB' 'Inactive: 2662368 kB' 'Active(anon): 126944 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118112 kB' 'Mapped: 47880 kB' 'Shmem: 10468 kB' 'KReclaimable: 82416 kB' 'Slab: 161216 kB' 'SReclaimable: 82416 kB' 'SUnreclaim: 78800 kB' 'KernelStack: 6400 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.945 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.946 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:28.947 nr_hugepages=1024 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:28.947 resv_hugepages=0 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:28.947 surplus_hugepages=0 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:28.947 anon_hugepages=0 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.947 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6655800 kB' 'MemAvailable: 9443336 kB' 'Buffers: 2436 kB' 'Cached: 2991300 kB' 'SwapCached: 0 kB' 'Active: 448104 kB' 'Inactive: 2662368 kB' 'Active(anon): 127204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118372 kB' 'Mapped: 47880 kB' 'Shmem: 10468 kB' 'KReclaimable: 82416 kB' 'Slab: 161216 kB' 'SReclaimable: 82416 kB' 'SUnreclaim: 78800 kB' 'KernelStack: 6400 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.948 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.949 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6655800 kB' 'MemUsed: 5586172 kB' 'SwapCached: 0 kB' 'Active: 
448092 kB' 'Inactive: 2662368 kB' 'Active(anon): 127192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2662368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2993736 kB' 'Mapped: 47880 kB' 'AnonPages: 118372 kB' 'Shmem: 10468 kB' 'KernelStack: 6400 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82416 kB' 'Slab: 161216 kB' 'SReclaimable: 82416 kB' 'SUnreclaim: 78800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 
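The wall of "continue" entries above is setup/common.sh's get_meminfo helper doing a plain key/value scan under xtrace: it snapshots /proc/meminfo (or a node's meminfo file), splits each line on ': ', skips every key that is not the one requested (HugePages_Rsvd first, then HugePages_Total, and now the per-node HugePages_Surp query), and echoes the matching value. A condensed, standalone restatement of that pattern follows; the function name is hypothetical and the real helper runs the loop through xtrace, which is why every skipped key shows up in the log.

    # Condensed sketch of the get_meminfo scan traced above (hypothetical name).
    shopt -s extglob                                   # needed for the "Node <n> " prefix strip
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")               # per-node files prefix every line with "Node 0 "
        local var val _
        while IFS=': ' read -r var val _; do           # "HugePages_Rsvd:   0" -> var=HugePages_Rsvd val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)        # 1024 in this run
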
04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.950 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 04:01:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:28.951 node0=1024 expecting 1024 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:28.951 00:05:28.951 real 0m1.070s 00:05:28.951 user 0m0.535s 00:05:28.951 sys 0m0.575s 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.951 04:01:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:28.951 ************************************ 00:05:28.951 END TEST no_shrink_alloc 00:05:28.951 ************************************ 00:05:28.951 04:01:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:28.951 04:01:22 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:28.951 04:01:22 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:28.951 04:01:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:28.951 
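With those scans done, the assertion the no_shrink_alloc test just made above is simple arithmetic: the pool must not have shrunk, reserved and surplus counts must be zero, and the single NUMA node must still own all 1024 pages, hence the "node0=1024 expecting 1024" line. A hypothetical condensed restatement, reusing the sketch from earlier:

    # Hypothetical restatement of the no_shrink_alloc bookkeeping traced above.
    nr_hugepages=1024
    resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0
    surp=$(get_meminfo_sketch HugePages_Surp 0)        # per-node surplus on node0, also 0
    total=$(get_meminfo_sketch HugePages_Total)        # 1024
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool shrank unexpectedly' >&2
    echo "node0=$total expecting $nr_hugepages"        # matches the "node0=1024 expecting 1024" output
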
04:01:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:28.951 04:01:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:28.951 04:01:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:28.951 04:01:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:28.951 04:01:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:28.951 04:01:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:28.951 00:05:28.951 real 0m4.661s 00:05:28.951 user 0m2.211s 00:05:28.951 sys 0m2.459s 00:05:28.951 04:01:22 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.951 04:01:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:28.951 ************************************ 00:05:28.951 END TEST hugepages 00:05:28.951 ************************************ 00:05:29.209 04:01:22 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:29.209 04:01:22 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:29.209 04:01:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.209 04:01:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.209 04:01:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:29.209 ************************************ 00:05:29.209 START TEST driver 00:05:29.209 ************************************ 00:05:29.209 04:01:22 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:29.209 * Looking for test storage... 00:05:29.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.209 04:01:22 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:29.209 04:01:22 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.209 04:01:22 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.775 04:01:22 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:29.775 04:01:22 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.775 04:01:22 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.775 04:01:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:29.775 ************************************ 00:05:29.775 START TEST guess_driver 00:05:29.775 ************************************ 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
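The clear_hp teardown traced just above releases every hugepage pool on every NUMA node before the next suite starts, then exports CLEAR_HUGE=yes. A minimal sketch of that cleanup (writing 0 directly into each nr_hugepages knob, which requires root; the script's own loop is structured slightly differently):

    # Minimal sketch of the clear_hp teardown traced above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
            echo 0 > "$hp"                             # release the pages reserved for this size on this node
        done
    done
    export CLEAR_HUGE=yes                              # the script exports this flag, as the trace shows
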
00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:29.775 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:29.775 Looking for driver=uio_pci_generic 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.775 04:01:22 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.342 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:30.342 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:30.342 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:30.601 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:30.601 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:30.601 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:30.601 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:30.601 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:30.601 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:30.601 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:30.601 04:01:23 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:30.601 04:01:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.601 04:01:23 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.168 00:05:31.168 real 0m1.413s 00:05:31.168 user 0m0.563s 00:05:31.168 sys 0m0.872s 00:05:31.168 04:01:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:31.168 04:01:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:31.168 ************************************ 00:05:31.168 END TEST guess_driver 00:05:31.168 ************************************ 00:05:31.168 04:01:24 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:31.168 00:05:31.168 real 0m2.123s 00:05:31.168 user 0m0.807s 00:05:31.168 sys 0m1.382s 00:05:31.168 04:01:24 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.168 04:01:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:31.168 ************************************ 00:05:31.168 END TEST driver 00:05:31.168 ************************************ 00:05:31.168 04:01:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:31.168 04:01:24 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:31.168 04:01:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.168 04:01:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.168 04:01:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:31.168 ************************************ 00:05:31.168 START TEST devices 00:05:31.168 ************************************ 00:05:31.168 04:01:24 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:31.426 * Looking for test storage... 00:05:31.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:31.426 04:01:24 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:31.426 04:01:24 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:31.426 04:01:24 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:31.426 04:01:24 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.993 04:01:25 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
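The guess_driver pass that just finished above picks the userspace I/O driver for the test devices: it prefers vfio when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise it falls back to uio_pci_generic after confirming that the module resolves with modprobe --show-depends. In this run there were no IOMMU groups, so it settled on uio_pci_generic. A simplified sketch of that decision, not the driver.sh code verbatim:

    # Simplified sketch of the driver selection traced above.
    shopt -s nullglob                                  # so an empty iommu_groups dir yields an empty array
    groups=(/sys/kernel/iommu_groups/*)
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        driver=vfio-pci                                # IOMMU available: prefer vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        driver=uio_pci_generic                         # module resolves: fall back to uio_pci_generic
    else
        driver='No valid driver found'
    fi
    echo "Looking for driver=$driver"
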
00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:31.993 04:01:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:31.994 04:01:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:31.994 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:31.994 04:01:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:31.994 04:01:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:32.253 No valid GPT data, bailing 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:32.253 04:01:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:32.253 04:01:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:32.253 04:01:25 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:32.253 
04:01:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:32.253 No valid GPT data, bailing 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:32.253 04:01:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:32.253 04:01:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:32.253 04:01:25 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:32.253 No valid GPT data, bailing 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:32.253 04:01:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:32.253 04:01:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:32.253 04:01:25 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:32.253 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:32.253 04:01:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:32.253 04:01:25 
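Each NVMe namespace above (and nvme1n1 just below) goes through the same eligibility filter: skip zoned block devices, skip anything that already carries a partition table (hence the repeated "No valid GPT data, bailing"), and require at least min_disk_size=3221225472 bytes (3 GiB). A sketch of that filter, assuming sizes are read from sysfs in 512-byte sectors; the glob is a simplification of the script's extglob pattern, which also excludes nvme*c* controller nodes:

    # Sketch of the disk-eligibility filter traced above (simplified glob).
    min_disk_size=3221225472                           # 3 GiB, as in devices.sh
    eligible=()
    for block in /sys/block/nvme*n[0-9]*; do
        dev=${block##*/}
        if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
            continue                                   # zoned namespaces are excluded
        fi
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue   # already partitioned
        size=$(( $(cat "$block/size") * 512 ))         # sysfs size is in 512-byte sectors
        (( size >= min_disk_size )) && eligible+=("$dev")
    done
    printf 'eligible: %s\n' "${eligible[@]}"
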
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:32.253 No valid GPT data, bailing 00:05:32.523 04:01:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:32.523 04:01:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:32.523 04:01:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:32.523 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:32.523 04:01:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:32.523 04:01:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:32.523 04:01:25 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:32.523 04:01:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:32.523 04:01:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:32.523 04:01:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:32.523 04:01:25 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:32.523 04:01:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:32.523 04:01:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:32.523 04:01:25 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.523 04:01:25 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.523 04:01:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:32.523 ************************************ 00:05:32.523 START TEST nvme_mount 00:05:32.523 ************************************ 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:32.523 04:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:33.465 Creating new GPT entries in memory. 00:05:33.465 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:33.465 other utilities. 00:05:33.465 04:01:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:33.465 04:01:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.465 04:01:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:33.465 04:01:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:33.465 04:01:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:34.400 Creating new GPT entries in memory. 00:05:34.400 The operation has completed successfully. 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 70671 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.400 04:01:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.658 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:34.658 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:34.658 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:34.658 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.658 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:34.658 04:01:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:34.915 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.915 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.173 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:35.173 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:35.173 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:35.173 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:35.173 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:35.173 04:01:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:35.173 04:01:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.173 04:01:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:35.173 04:01:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:35.173 04:01:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:35.451 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.728 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:35.728 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.728 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:35.728 04:01:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.728 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.728 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:35.728 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.728 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:35.728 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:35.728 04:01:28 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.728 04:01:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:35.986 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:35.986 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:35.986 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:35.986 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.986 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:35.986 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:36.251 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:36.251 00:05:36.251 real 0m3.959s 00:05:36.251 user 0m0.659s 00:05:36.251 sys 0m1.049s 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.251 04:01:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:36.251 ************************************ 00:05:36.251 END TEST nvme_mount 00:05:36.251 ************************************ 00:05:36.510 04:01:29 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:36.510 04:01:29 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:36.510 04:01:29 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.510 04:01:29 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.510 04:01:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:36.510 ************************************ 00:05:36.510 START TEST dm_mount 00:05:36.510 ************************************ 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
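For reference, the partition_drive trace above (used by the nvme_mount run that just finished and by the dm_mount run that follows) amounts to wiping the GPT and carving fixed-size partitions with sgdisk while a helper script waits for the matching kernel uevents. A minimal manual sketch, assuming the same disk and the sector ranges shown in the trace (partprobe stands in for scripts/sync_dev_uevents.sh, which the harness actually uses):

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                             # destroy any existing GPT/MBR signatures
    flock "$disk" sgdisk "$disk" --new=1:2048:264191     # first 262144-sector test partition
    flock "$disk" sgdisk "$disk" --new=2:264192:526335   # second partition (dm_mount case only)
    partprobe "$disk"                                    # assumption: re-read the table instead of waiting on uevents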
00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:36.510 04:01:29 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:37.443 Creating new GPT entries in memory. 00:05:37.443 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:37.443 other utilities. 00:05:37.443 04:01:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:37.443 04:01:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:37.443 04:01:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:37.443 04:01:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:37.443 04:01:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:38.378 Creating new GPT entries in memory. 00:05:38.378 The operation has completed successfully. 00:05:38.378 04:01:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:38.378 04:01:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.378 04:01:31 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:38.378 04:01:31 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:38.378 04:01:31 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:39.754 The operation has completed successfully. 
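The dm_mount steps that follow stack a device-mapper device across the two partitions just created, then format and mount it. The exact table the helper builds is not shown in the trace, so the two linear segments below are an assumption sized from the 262144-sector partitions above; the remaining commands mirror the trace:

    printf '%s\n' \
      '0 262144 linear /dev/nvme0n1p1 0' \
      '262144 262144 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test
    mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount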
00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 71102 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.754 04:01:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:39.754 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:39.754 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:39.754 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:39.754 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.754 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:39.754 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.012 04:01:33 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:40.270 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:40.270 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:40.270 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:40.270 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.270 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:40.270 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:40.528 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:40.788 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:40.788 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:40.788 04:01:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:40.788 00:05:40.788 real 0m4.249s 00:05:40.788 user 0m0.482s 00:05:40.788 sys 0m0.733s 00:05:40.788 04:01:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.788 04:01:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:40.788 ************************************ 00:05:40.788 END TEST dm_mount 00:05:40.788 ************************************ 00:05:40.788 04:01:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:40.788 04:01:33 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:40.788 04:01:33 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:40.788 04:01:33 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:40.788 04:01:33 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:40.788 04:01:33 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:40.788 04:01:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:40.788 04:01:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:41.047 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:41.047 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:41.047 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:41.047 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:41.047 04:01:34 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:41.047 04:01:34 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:41.047 04:01:34 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:41.047 04:01:34 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:41.047 04:01:34 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:41.047 04:01:34 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:41.047 04:01:34 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:41.047 00:05:41.047 real 0m9.725s 00:05:41.047 user 0m1.760s 00:05:41.047 sys 0m2.374s 00:05:41.047 ************************************ 00:05:41.047 END TEST devices 00:05:41.047 04:01:34 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.047 04:01:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:41.047 ************************************ 00:05:41.047 04:01:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:41.047 00:05:41.047 real 0m21.624s 00:05:41.047 user 0m6.993s 00:05:41.047 sys 0m9.029s 00:05:41.047 04:01:34 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.047 04:01:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:41.047 ************************************ 00:05:41.047 END TEST setup.sh 00:05:41.047 ************************************ 00:05:41.047 04:01:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.047 04:01:34 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:41.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.628 Hugepages 00:05:41.628 node hugesize free / total 00:05:41.628 node0 1048576kB 0 / 0 00:05:41.628 node0 2048kB 2048 / 2048 00:05:41.628 00:05:41.628 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:41.934 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:41.934 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:41.935 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:41.935 04:01:35 -- spdk/autotest.sh@130 -- # uname -s 00:05:41.935 04:01:35 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:41.935 04:01:35 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:41.935 04:01:35 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.501 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.760 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.760 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.760 04:01:36 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:43.692 04:01:37 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:43.692 04:01:37 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:43.692 04:01:37 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:43.692 04:01:37 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:43.692 04:01:37 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:43.692 04:01:37 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:43.692 04:01:37 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.692 04:01:37 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.692 04:01:37 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:43.949 04:01:37 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:43.950 04:01:37 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:43.950 04:01:37 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.209 Waiting for block devices as requested 00:05:44.209 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:44.467 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:44.467 04:01:37 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:44.467 04:01:37 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:44.467 04:01:37 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:44.467 04:01:37 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:44.467 04:01:37 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:44.467 04:01:37 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:44.467 04:01:37 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:44.467 04:01:37 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:44.467 04:01:37 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:44.467 04:01:37 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:44.467 04:01:37 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:44.467 04:01:37 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:44.467 04:01:37 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:44.467 04:01:37 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:44.467 04:01:37 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:44.467 04:01:37 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:44.467 04:01:37 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:44.467 04:01:37 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:44.467 04:01:37 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:44.467 04:01:37 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:44.467 04:01:37 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:44.467 04:01:37 -- common/autotest_common.sh@1557 -- # continue 00:05:44.467 
04:01:37 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:44.467 04:01:37 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:44.467 04:01:37 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:44.467 04:01:37 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:44.467 04:01:37 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:44.467 04:01:37 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:44.467 04:01:37 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:44.467 04:01:37 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:44.467 04:01:37 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:44.467 04:01:37 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:44.467 04:01:37 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:44.467 04:01:37 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:44.467 04:01:37 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:44.467 04:01:37 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:44.467 04:01:37 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:44.467 04:01:37 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:44.467 04:01:37 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:44.467 04:01:37 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:44.467 04:01:37 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:44.467 04:01:37 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:44.467 04:01:37 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:44.467 04:01:37 -- common/autotest_common.sh@1557 -- # continue 00:05:44.467 04:01:37 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:44.467 04:01:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:44.468 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:44.468 04:01:37 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:44.468 04:01:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.468 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:44.468 04:01:37 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.417 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.417 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.417 04:01:38 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:45.417 04:01:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.417 04:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.417 04:01:38 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:45.417 04:01:38 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:45.417 04:01:38 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:45.417 04:01:38 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:45.417 04:01:38 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:45.417 04:01:38 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:45.417 04:01:38 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:45.417 04:01:38 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:45.417 04:01:38 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:45.417 04:01:38 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:45.417 04:01:38 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:45.417 04:01:38 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:45.417 04:01:38 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:45.417 04:01:38 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:45.417 04:01:38 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:45.417 04:01:38 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:45.417 04:01:38 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:45.417 04:01:38 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:45.417 04:01:38 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:45.417 04:01:38 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:45.417 04:01:38 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:45.417 04:01:38 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:45.417 04:01:38 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:45.417 04:01:38 -- common/autotest_common.sh@1593 -- # return 0 00:05:45.417 04:01:38 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:45.417 04:01:38 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:45.417 04:01:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:45.417 04:01:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:45.417 04:01:38 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:45.417 04:01:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.417 04:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.417 04:01:38 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:05:45.417 04:01:38 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:45.417 04:01:38 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:45.417 04:01:38 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:45.417 04:01:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.417 04:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.417 04:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.417 ************************************ 00:05:45.417 START TEST env 00:05:45.417 ************************************ 00:05:45.417 04:01:38 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:45.676 * Looking for test storage... 
00:05:45.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:45.676 04:01:38 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:45.676 04:01:38 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.676 04:01:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.676 04:01:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.676 ************************************ 00:05:45.676 START TEST env_memory 00:05:45.676 ************************************ 00:05:45.676 04:01:38 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:45.676 00:05:45.676 00:05:45.676 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.676 http://cunit.sourceforge.net/ 00:05:45.676 00:05:45.676 00:05:45.676 Suite: memory 00:05:45.676 Test: alloc and free memory map ...[2024-07-23 04:01:38.877752] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:45.676 passed 00:05:45.676 Test: mem map translation ...[2024-07-23 04:01:38.908749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:45.677 [2024-07-23 04:01:38.908795] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:45.677 [2024-07-23 04:01:38.908852] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:45.677 [2024-07-23 04:01:38.908865] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:45.677 passed 00:05:45.677 Test: mem map registration ...[2024-07-23 04:01:38.972622] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:45.677 [2024-07-23 04:01:38.972661] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:45.677 passed 00:05:45.935 Test: mem map adjacent registrations ...passed 00:05:45.935 00:05:45.936 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.936 suites 1 1 n/a 0 0 00:05:45.936 tests 4 4 4 0 0 00:05:45.936 asserts 152 152 152 0 n/a 00:05:45.936 00:05:45.936 Elapsed time = 0.215 seconds 00:05:45.936 00:05:45.936 real 0m0.230s 00:05:45.936 user 0m0.215s 00:05:45.936 sys 0m0.012s 00:05:45.936 04:01:39 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.936 04:01:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:45.936 ************************************ 00:05:45.936 END TEST env_memory 00:05:45.936 ************************************ 00:05:45.936 04:01:39 env -- common/autotest_common.sh@1142 -- # return 0 00:05:45.936 04:01:39 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:45.936 04:01:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.936 04:01:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.936 04:01:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.936 ************************************ 00:05:45.936 START TEST env_vtophys 
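The env suite simply runs each prebuilt unit-test binary under the run_test timing wrapper. Outside the harness the same binaries can be launched directly with no arguments (paths from the trace); the vtophys test additionally assumes the 2048 kB hugepages reserved earlier are still available:

    /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
    /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
    grep -i hugepages /proc/meminfo    # quick sanity check on the hugepage pool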
00:05:45.936 ************************************ 00:05:45.936 04:01:39 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:45.936 EAL: lib.eal log level changed from notice to debug 00:05:45.936 EAL: Detected lcore 0 as core 0 on socket 0 00:05:45.936 EAL: Detected lcore 1 as core 0 on socket 0 00:05:45.936 EAL: Detected lcore 2 as core 0 on socket 0 00:05:45.936 EAL: Detected lcore 3 as core 0 on socket 0 00:05:45.936 EAL: Detected lcore 4 as core 0 on socket 0 00:05:45.936 EAL: Detected lcore 5 as core 0 on socket 0 00:05:45.936 EAL: Detected lcore 6 as core 0 on socket 0 00:05:45.936 EAL: Detected lcore 7 as core 0 on socket 0 00:05:45.936 EAL: Detected lcore 8 as core 0 on socket 0 00:05:45.936 EAL: Detected lcore 9 as core 0 on socket 0 00:05:45.936 EAL: Maximum logical cores by configuration: 128 00:05:45.936 EAL: Detected CPU lcores: 10 00:05:45.936 EAL: Detected NUMA nodes: 1 00:05:45.936 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:05:45.936 EAL: Detected shared linkage of DPDK 00:05:45.936 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:05:45.936 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:05:45.936 EAL: Registered [vdev] bus. 00:05:45.936 EAL: bus.vdev log level changed from disabled to notice 00:05:45.936 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:05:45.936 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:05:45.936 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:45.936 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:45.936 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:05:45.936 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:05:45.936 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:05:45.936 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:05:45.936 EAL: No shared files mode enabled, IPC will be disabled 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Selected IOVA mode 'PA' 00:05:45.936 EAL: Probing VFIO support... 00:05:45.936 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:45.936 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:45.936 EAL: Ask a virtual area of 0x2e000 bytes 00:05:45.936 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:45.936 EAL: Setting up physically contiguous memory... 
00:05:45.936 EAL: Setting maximum number of open files to 524288 00:05:45.936 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:45.936 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:45.936 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.936 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:45.936 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.936 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.936 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:45.936 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:45.936 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.936 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:45.936 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.936 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.936 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:45.936 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:45.936 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.936 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:45.936 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.936 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.936 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:45.936 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:45.936 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.936 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:45.936 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.936 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.936 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:45.936 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:45.936 EAL: Hugepages will be freed exactly as allocated. 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: TSC frequency is ~2200000 KHz 00:05:45.936 EAL: Main lcore 0 is ready (tid=7f05a7092a00;cpuset=[0]) 00:05:45.936 EAL: Trying to obtain current memory policy. 00:05:45.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.936 EAL: Restoring previous memory policy: 0 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was expanded by 2MB 00:05:45.936 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Mem event callback 'spdk:(nil)' registered 00:05:45.936 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:45.936 00:05:45.936 00:05:45.936 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.936 http://cunit.sourceforge.net/ 00:05:45.936 00:05:45.936 00:05:45.936 Suite: components_suite 00:05:45.936 Test: vtophys_malloc_test ...passed 00:05:45.936 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:45.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.936 EAL: Restoring previous memory policy: 4 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was expanded by 4MB 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was shrunk by 4MB 00:05:45.936 EAL: Trying to obtain current memory policy. 00:05:45.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.936 EAL: Restoring previous memory policy: 4 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was expanded by 6MB 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was shrunk by 6MB 00:05:45.936 EAL: Trying to obtain current memory policy. 00:05:45.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.936 EAL: Restoring previous memory policy: 4 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was expanded by 10MB 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was shrunk by 10MB 00:05:45.936 EAL: Trying to obtain current memory policy. 00:05:45.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.936 EAL: Restoring previous memory policy: 4 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was expanded by 18MB 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was shrunk by 18MB 00:05:45.936 EAL: Trying to obtain current memory policy. 00:05:45.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.195 EAL: Restoring previous memory policy: 4 00:05:46.195 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.195 EAL: request: mp_malloc_sync 00:05:46.195 EAL: No shared files mode enabled, IPC is disabled 00:05:46.195 EAL: Heap on socket 0 was expanded by 34MB 00:05:46.195 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.195 EAL: request: mp_malloc_sync 00:05:46.195 EAL: No shared files mode enabled, IPC is disabled 00:05:46.195 EAL: Heap on socket 0 was shrunk by 34MB 00:05:46.195 EAL: Trying to obtain current memory policy. 
00:05:46.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.195 EAL: Restoring previous memory policy: 4 00:05:46.195 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.195 EAL: request: mp_malloc_sync 00:05:46.195 EAL: No shared files mode enabled, IPC is disabled 00:05:46.195 EAL: Heap on socket 0 was expanded by 66MB 00:05:46.195 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.195 EAL: request: mp_malloc_sync 00:05:46.195 EAL: No shared files mode enabled, IPC is disabled 00:05:46.195 EAL: Heap on socket 0 was shrunk by 66MB 00:05:46.195 EAL: Trying to obtain current memory policy. 00:05:46.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.195 EAL: Restoring previous memory policy: 4 00:05:46.195 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.195 EAL: request: mp_malloc_sync 00:05:46.195 EAL: No shared files mode enabled, IPC is disabled 00:05:46.195 EAL: Heap on socket 0 was expanded by 130MB 00:05:46.195 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.195 EAL: request: mp_malloc_sync 00:05:46.195 EAL: No shared files mode enabled, IPC is disabled 00:05:46.195 EAL: Heap on socket 0 was shrunk by 130MB 00:05:46.195 EAL: Trying to obtain current memory policy. 00:05:46.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.195 EAL: Restoring previous memory policy: 4 00:05:46.195 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.195 EAL: request: mp_malloc_sync 00:05:46.195 EAL: No shared files mode enabled, IPC is disabled 00:05:46.195 EAL: Heap on socket 0 was expanded by 258MB 00:05:46.195 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.453 EAL: request: mp_malloc_sync 00:05:46.453 EAL: No shared files mode enabled, IPC is disabled 00:05:46.453 EAL: Heap on socket 0 was shrunk by 258MB 00:05:46.453 EAL: Trying to obtain current memory policy. 00:05:46.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.453 EAL: Restoring previous memory policy: 4 00:05:46.453 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.453 EAL: request: mp_malloc_sync 00:05:46.453 EAL: No shared files mode enabled, IPC is disabled 00:05:46.453 EAL: Heap on socket 0 was expanded by 514MB 00:05:46.453 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.716 EAL: request: mp_malloc_sync 00:05:46.716 EAL: No shared files mode enabled, IPC is disabled 00:05:46.716 EAL: Heap on socket 0 was shrunk by 514MB 00:05:46.716 EAL: Trying to obtain current memory policy. 
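Each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair above is one round of vtophys_spdk_malloc_test: the test allocates a buffer roughly twice as large as the previous one, the allocation does not fit in the current heap, so the EAL maps additional 2 MiB hugepages and invokes the registered 'spdk:(nil)' mem event callback (that callback is how the SPDK env layer keeps its virtual-to-physical map current); freeing the buffer invokes the callback again and the pages are unmapped, exactly as the earlier "Hugepages will be freed exactly as allocated" line promised. The final 1026 MB round and the CUnit summary follow below. Because pages really are returned to the kernel, the behaviour shows up in the hugepage counters; a crude way to watch it while the test runs (assuming 2 MiB hugepages) is:

    # Crude observation sketch: run in a second terminal while the env tests
    # execute; HugePages_Free drops on each expansion and recovers on free.
    watch -n 0.5 'grep -E "HugePages_(Total|Free)" /proc/meminfo'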
00:05:46.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.007 EAL: Restoring previous memory policy: 4 00:05:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.007 EAL: request: mp_malloc_sync 00:05:47.007 EAL: No shared files mode enabled, IPC is disabled 00:05:47.007 EAL: Heap on socket 0 was expanded by 1026MB 00:05:47.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.265 EAL: request: mp_malloc_sync 00:05:47.265 EAL: No shared files mode enabled, IPC is disabled 00:05:47.265 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:47.265 passed 00:05:47.265 00:05:47.265 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.265 suites 1 1 n/a 0 0 00:05:47.265 tests 2 2 2 0 0 00:05:47.265 asserts 5330 5330 5330 0 n/a 00:05:47.265 00:05:47.265 Elapsed time = 1.212 seconds 00:05:47.265 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.265 EAL: request: mp_malloc_sync 00:05:47.265 EAL: No shared files mode enabled, IPC is disabled 00:05:47.265 EAL: Heap on socket 0 was shrunk by 2MB 00:05:47.265 EAL: No shared files mode enabled, IPC is disabled 00:05:47.265 EAL: No shared files mode enabled, IPC is disabled 00:05:47.265 EAL: No shared files mode enabled, IPC is disabled 00:05:47.265 00:05:47.265 real 0m1.400s 00:05:47.265 user 0m0.771s 00:05:47.265 sys 0m0.496s 00:05:47.265 04:01:40 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.265 04:01:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:47.265 ************************************ 00:05:47.265 END TEST env_vtophys 00:05:47.265 ************************************ 00:05:47.265 04:01:40 env -- common/autotest_common.sh@1142 -- # return 0 00:05:47.265 04:01:40 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:47.265 04:01:40 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.265 04:01:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.265 04:01:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.265 ************************************ 00:05:47.265 START TEST env_pci 00:05:47.265 ************************************ 00:05:47.265 04:01:40 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:47.265 00:05:47.265 00:05:47.265 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.265 http://cunit.sourceforge.net/ 00:05:47.265 00:05:47.265 00:05:47.265 Suite: pci 00:05:47.265 Test: pci_hook ...[2024-07-23 04:01:40.576984] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 72302 has claimed it 00:05:47.265 passed 00:05:47.265 00:05:47.265 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.265 suites 1 1 n/a 0 0 00:05:47.265 tests 1 1 1 0 0 00:05:47.265 asserts 25 25 25 0 n/a 00:05:47.265 00:05:47.265 Elapsed time = 0.002 seconds 00:05:47.265 EAL: Cannot find device (10000:00:01.0) 00:05:47.265 EAL: Failed to attach device on primary process 00:05:47.265 00:05:47.265 real 0m0.016s 00:05:47.265 user 0m0.008s 00:05:47.265 sys 0m0.008s 00:05:47.265 04:01:40 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.265 04:01:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:47.265 ************************************ 00:05:47.265 END TEST env_pci 00:05:47.265 ************************************ 00:05:47.523 04:01:40 env -- common/autotest_common.sh@1142 -- # 
return 0 00:05:47.523 04:01:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:47.523 04:01:40 env -- env/env.sh@15 -- # uname 00:05:47.523 04:01:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:47.523 04:01:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:47.523 04:01:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.523 04:01:40 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:47.523 04:01:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.523 04:01:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.523 ************************************ 00:05:47.523 START TEST env_dpdk_post_init 00:05:47.523 ************************************ 00:05:47.523 04:01:40 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.523 EAL: Detected CPU lcores: 10 00:05:47.523 EAL: Detected NUMA nodes: 1 00:05:47.523 EAL: Detected shared linkage of DPDK 00:05:47.523 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.523 EAL: Selected IOVA mode 'PA' 00:05:47.523 Starting DPDK initialization... 00:05:47.523 Starting SPDK post initialization... 00:05:47.523 SPDK NVMe probe 00:05:47.523 Attaching to 0000:00:10.0 00:05:47.523 Attaching to 0000:00:11.0 00:05:47.523 Attached to 0000:00:10.0 00:05:47.523 Attached to 0000:00:11.0 00:05:47.523 Cleaning up... 00:05:47.523 00:05:47.524 real 0m0.158s 00:05:47.524 user 0m0.034s 00:05:47.524 sys 0m0.025s 00:05:47.524 04:01:40 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.524 04:01:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.524 ************************************ 00:05:47.524 END TEST env_dpdk_post_init 00:05:47.524 ************************************ 00:05:47.524 04:01:40 env -- common/autotest_common.sh@1142 -- # return 0 00:05:47.524 04:01:40 env -- env/env.sh@26 -- # uname 00:05:47.524 04:01:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:47.524 04:01:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.524 04:01:40 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.524 04:01:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.524 04:01:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.524 ************************************ 00:05:47.524 START TEST env_mem_callbacks 00:05:47.524 ************************************ 00:05:47.524 04:01:40 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.782 EAL: Detected CPU lcores: 10 00:05:47.782 EAL: Detected NUMA nodes: 1 00:05:47.782 EAL: Detected shared linkage of DPDK 00:05:47.782 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.782 EAL: Selected IOVA mode 'PA' 00:05:47.782 00:05:47.782 00:05:47.782 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.782 http://cunit.sourceforge.net/ 00:05:47.782 00:05:47.782 00:05:47.782 Suite: memory 00:05:47.782 Test: test ... 
00:05:47.782 register 0x200000200000 2097152 00:05:47.782 malloc 3145728 00:05:47.782 register 0x200000400000 4194304 00:05:47.782 buf 0x200000500000 len 3145728 PASSED 00:05:47.782 malloc 64 00:05:47.782 buf 0x2000004fff40 len 64 PASSED 00:05:47.782 malloc 4194304 00:05:47.782 register 0x200000800000 6291456 00:05:47.782 buf 0x200000a00000 len 4194304 PASSED 00:05:47.782 free 0x200000500000 3145728 00:05:47.782 free 0x2000004fff40 64 00:05:47.782 unregister 0x200000400000 4194304 PASSED 00:05:47.782 free 0x200000a00000 4194304 00:05:47.782 unregister 0x200000800000 6291456 PASSED 00:05:47.782 malloc 8388608 00:05:47.782 register 0x200000400000 10485760 00:05:47.782 buf 0x200000600000 len 8388608 PASSED 00:05:47.782 free 0x200000600000 8388608 00:05:47.782 unregister 0x200000400000 10485760 PASSED 00:05:47.782 passed 00:05:47.782 00:05:47.782 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.782 suites 1 1 n/a 0 0 00:05:47.782 tests 1 1 1 0 0 00:05:47.782 asserts 15 15 15 0 n/a 00:05:47.782 00:05:47.782 Elapsed time = 0.009 seconds 00:05:47.782 00:05:47.782 real 0m0.144s 00:05:47.782 user 0m0.017s 00:05:47.782 sys 0m0.025s 00:05:47.782 04:01:40 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.782 04:01:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:47.782 ************************************ 00:05:47.782 END TEST env_mem_callbacks 00:05:47.782 ************************************ 00:05:47.782 04:01:41 env -- common/autotest_common.sh@1142 -- # return 0 00:05:47.782 00:05:47.782 real 0m2.302s 00:05:47.782 user 0m1.154s 00:05:47.782 sys 0m0.788s 00:05:47.782 04:01:41 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.782 04:01:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.782 ************************************ 00:05:47.782 END TEST env 00:05:47.782 ************************************ 00:05:47.782 04:01:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.782 04:01:41 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:47.782 04:01:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.782 04:01:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.782 04:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:47.782 ************************************ 00:05:47.782 START TEST rpc 00:05:47.782 ************************************ 00:05:47.782 04:01:41 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:48.040 * Looking for test storage... 00:05:48.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.040 04:01:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=72411 00:05:48.040 04:01:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.040 04:01:41 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:48.040 04:01:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 72411 00:05:48.040 04:01:41 rpc -- common/autotest_common.sh@829 -- # '[' -z 72411 ']' 00:05:48.040 04:01:41 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.040 04:01:41 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.040 04:01:41 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
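At this point the whole env suite has passed in about 2.3 seconds: vtophys, env_pci (which deliberately exercises the device-claim failure path for the fake 10000:00:01.0 address, hence the *ERROR* line), env_dpdk_post_init (probing and attaching the two test NVMe devices at 0000:00:10.0 and 0000:00:11.0), and env_mem_callbacks. The mem_callbacks output is worth reading closely: registrations always arrive in whole 2 MiB multiples (the 3 MiB malloc produces a 4 MiB register, the 4 MiB malloc a 6 MiB one), while the 64-byte malloc produces no callback at all because it is served from already-registered memory. The log then moves on to rpc.sh, which starts spdk_tgt with -e bdev and waits for /var/tmp/spdk.sock. The callback demonstration can be replayed outside CI with the standalone binary used above once hugepages are set up; the HUGEMEM value below is an arbitrary assumption, the paths are the ones from this job.

    # Sketch: allocate hugepages and rerun the mem_callbacks unit test by hand
    # (HUGEMEM is in MB and is an example value).
    cd /home/vagrant/spdk_repo/spdk
    sudo HUGEMEM=2048 ./scripts/setup.sh
    sudo ./test/env/mem_callbacks/mem_callbacks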
00:05:48.041 04:01:41 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.041 04:01:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.041 [2024-07-23 04:01:41.247495] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:48.041 [2024-07-23 04:01:41.247589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72411 ] 00:05:48.041 [2024-07-23 04:01:41.370923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:48.299 [2024-07-23 04:01:41.388627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.299 [2024-07-23 04:01:41.451354] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:48.299 [2024-07-23 04:01:41.451410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 72411' to capture a snapshot of events at runtime. 00:05:48.299 [2024-07-23 04:01:41.451422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:48.299 [2024-07-23 04:01:41.451430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:48.299 [2024-07-23 04:01:41.451437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid72411 for offline analysis/debug. 00:05:48.299 [2024-07-23 04:01:41.451468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.299 [2024-07-23 04:01:41.504358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:49.235 04:01:42 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.235 04:01:42 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:49.235 04:01:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.235 04:01:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.235 04:01:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:49.235 04:01:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:49.235 04:01:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.235 04:01:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.235 04:01:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.235 ************************************ 00:05:49.235 START TEST rpc_integrity 00:05:49.235 ************************************ 00:05:49.235 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:49.235 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.235 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.235 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.235 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.235 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.235 04:01:42 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.235 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.235 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.235 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.235 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.235 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.235 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:49.235 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.235 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.236 { 00:05:49.236 "name": "Malloc0", 00:05:49.236 "aliases": [ 00:05:49.236 "e9aa0d55-7168-4060-9fe4-2392883285b6" 00:05:49.236 ], 00:05:49.236 "product_name": "Malloc disk", 00:05:49.236 "block_size": 512, 00:05:49.236 "num_blocks": 16384, 00:05:49.236 "uuid": "e9aa0d55-7168-4060-9fe4-2392883285b6", 00:05:49.236 "assigned_rate_limits": { 00:05:49.236 "rw_ios_per_sec": 0, 00:05:49.236 "rw_mbytes_per_sec": 0, 00:05:49.236 "r_mbytes_per_sec": 0, 00:05:49.236 "w_mbytes_per_sec": 0 00:05:49.236 }, 00:05:49.236 "claimed": false, 00:05:49.236 "zoned": false, 00:05:49.236 "supported_io_types": { 00:05:49.236 "read": true, 00:05:49.236 "write": true, 00:05:49.236 "unmap": true, 00:05:49.236 "flush": true, 00:05:49.236 "reset": true, 00:05:49.236 "nvme_admin": false, 00:05:49.236 "nvme_io": false, 00:05:49.236 "nvme_io_md": false, 00:05:49.236 "write_zeroes": true, 00:05:49.236 "zcopy": true, 00:05:49.236 "get_zone_info": false, 00:05:49.236 "zone_management": false, 00:05:49.236 "zone_append": false, 00:05:49.236 "compare": false, 00:05:49.236 "compare_and_write": false, 00:05:49.236 "abort": true, 00:05:49.236 "seek_hole": false, 00:05:49.236 "seek_data": false, 00:05:49.236 "copy": true, 00:05:49.236 "nvme_iov_md": false 00:05:49.236 }, 00:05:49.236 "memory_domains": [ 00:05:49.236 { 00:05:49.236 "dma_device_id": "system", 00:05:49.236 "dma_device_type": 1 00:05:49.236 }, 00:05:49.236 { 00:05:49.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.236 "dma_device_type": 2 00:05:49.236 } 00:05:49.236 ], 00:05:49.236 "driver_specific": {} 00:05:49.236 } 00:05:49.236 ]' 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.236 [2024-07-23 04:01:42.411109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:49.236 [2024-07-23 04:01:42.411154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.236 [2024-07-23 04:01:42.411172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17dda80 00:05:49.236 [2024-07-23 04:01:42.411181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.236 [2024-07-23 04:01:42.412657] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.236 [2024-07-23 04:01:42.412688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.236 Passthru0 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.236 { 00:05:49.236 "name": "Malloc0", 00:05:49.236 "aliases": [ 00:05:49.236 "e9aa0d55-7168-4060-9fe4-2392883285b6" 00:05:49.236 ], 00:05:49.236 "product_name": "Malloc disk", 00:05:49.236 "block_size": 512, 00:05:49.236 "num_blocks": 16384, 00:05:49.236 "uuid": "e9aa0d55-7168-4060-9fe4-2392883285b6", 00:05:49.236 "assigned_rate_limits": { 00:05:49.236 "rw_ios_per_sec": 0, 00:05:49.236 "rw_mbytes_per_sec": 0, 00:05:49.236 "r_mbytes_per_sec": 0, 00:05:49.236 "w_mbytes_per_sec": 0 00:05:49.236 }, 00:05:49.236 "claimed": true, 00:05:49.236 "claim_type": "exclusive_write", 00:05:49.236 "zoned": false, 00:05:49.236 "supported_io_types": { 00:05:49.236 "read": true, 00:05:49.236 "write": true, 00:05:49.236 "unmap": true, 00:05:49.236 "flush": true, 00:05:49.236 "reset": true, 00:05:49.236 "nvme_admin": false, 00:05:49.236 "nvme_io": false, 00:05:49.236 "nvme_io_md": false, 00:05:49.236 "write_zeroes": true, 00:05:49.236 "zcopy": true, 00:05:49.236 "get_zone_info": false, 00:05:49.236 "zone_management": false, 00:05:49.236 "zone_append": false, 00:05:49.236 "compare": false, 00:05:49.236 "compare_and_write": false, 00:05:49.236 "abort": true, 00:05:49.236 "seek_hole": false, 00:05:49.236 "seek_data": false, 00:05:49.236 "copy": true, 00:05:49.236 "nvme_iov_md": false 00:05:49.236 }, 00:05:49.236 "memory_domains": [ 00:05:49.236 { 00:05:49.236 "dma_device_id": "system", 00:05:49.236 "dma_device_type": 1 00:05:49.236 }, 00:05:49.236 { 00:05:49.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.236 "dma_device_type": 2 00:05:49.236 } 00:05:49.236 ], 00:05:49.236 "driver_specific": {} 00:05:49.236 }, 00:05:49.236 { 00:05:49.236 "name": "Passthru0", 00:05:49.236 "aliases": [ 00:05:49.236 "1e100c5d-e0b6-58d8-b37d-98d3ab2acb5d" 00:05:49.236 ], 00:05:49.236 "product_name": "passthru", 00:05:49.236 "block_size": 512, 00:05:49.236 "num_blocks": 16384, 00:05:49.236 "uuid": "1e100c5d-e0b6-58d8-b37d-98d3ab2acb5d", 00:05:49.236 "assigned_rate_limits": { 00:05:49.236 "rw_ios_per_sec": 0, 00:05:49.236 "rw_mbytes_per_sec": 0, 00:05:49.236 "r_mbytes_per_sec": 0, 00:05:49.236 "w_mbytes_per_sec": 0 00:05:49.236 }, 00:05:49.236 "claimed": false, 00:05:49.236 "zoned": false, 00:05:49.236 "supported_io_types": { 00:05:49.236 "read": true, 00:05:49.236 "write": true, 00:05:49.236 "unmap": true, 00:05:49.236 "flush": true, 00:05:49.236 "reset": true, 00:05:49.236 "nvme_admin": false, 00:05:49.236 "nvme_io": false, 00:05:49.236 "nvme_io_md": false, 00:05:49.236 "write_zeroes": true, 00:05:49.236 "zcopy": true, 00:05:49.236 "get_zone_info": false, 00:05:49.236 "zone_management": false, 00:05:49.236 "zone_append": false, 00:05:49.236 "compare": false, 00:05:49.236 "compare_and_write": false, 00:05:49.236 "abort": true, 00:05:49.236 "seek_hole": false, 00:05:49.236 "seek_data": 
false, 00:05:49.236 "copy": true, 00:05:49.236 "nvme_iov_md": false 00:05:49.236 }, 00:05:49.236 "memory_domains": [ 00:05:49.236 { 00:05:49.236 "dma_device_id": "system", 00:05:49.236 "dma_device_type": 1 00:05:49.236 }, 00:05:49.236 { 00:05:49.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.236 "dma_device_type": 2 00:05:49.236 } 00:05:49.236 ], 00:05:49.236 "driver_specific": { 00:05:49.236 "passthru": { 00:05:49.236 "name": "Passthru0", 00:05:49.236 "base_bdev_name": "Malloc0" 00:05:49.236 } 00:05:49.236 } 00:05:49.236 } 00:05:49.236 ]' 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.236 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.236 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.495 04:01:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.495 00:05:49.495 real 0m0.337s 00:05:49.495 user 0m0.239s 00:05:49.495 sys 0m0.032s 00:05:49.495 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.495 04:01:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.495 ************************************ 00:05:49.495 END TEST rpc_integrity 00:05:49.495 ************************************ 00:05:49.495 04:01:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.495 04:01:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:49.495 04:01:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.495 04:01:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.495 04:01:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.495 ************************************ 00:05:49.495 START TEST rpc_plugins 00:05:49.495 ************************************ 00:05:49.495 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:49.495 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:49.495 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.495 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.495 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.495 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:49.495 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 
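rpc_integrity has just passed. Over /var/tmp/spdk.sock it creates an 8 MiB malloc bdev with 512-byte blocks (bdev_malloc_create 8 512), layers a passthru bdev on top of it (bdev_passthru_create -b Malloc0 -p Passthru0), uses bdev_get_bdevs plus jq to confirm that exactly two bdevs exist and that Malloc0 is now claimed with claim_type "exclusive_write", then deletes both in reverse order and checks the list is empty again. rpc_plugins, which has just started, does the same round trip through a test plugin (--plugin rpc_plugin create_malloc / delete_malloc), with the plugin's Malloc1 listed just below. The same flow can be driven by hand against any running SPDK target with scripts/rpc.py; the sketch assumes the default RPC socket.

    # Manual replay of the rpc_integrity flow (assumes a running spdk_tgt on
    # the default /var/tmp/spdk.sock).
    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py bdev_malloc_create 8 512             # prints the new name (Malloc0 in the run above)
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # expect 0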
00:05:49.495 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.495 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.495 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.495 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:49.495 { 00:05:49.495 "name": "Malloc1", 00:05:49.495 "aliases": [ 00:05:49.495 "ed829f10-7a60-4f20-ad64-3b45fded763a" 00:05:49.495 ], 00:05:49.495 "product_name": "Malloc disk", 00:05:49.495 "block_size": 4096, 00:05:49.495 "num_blocks": 256, 00:05:49.495 "uuid": "ed829f10-7a60-4f20-ad64-3b45fded763a", 00:05:49.495 "assigned_rate_limits": { 00:05:49.495 "rw_ios_per_sec": 0, 00:05:49.495 "rw_mbytes_per_sec": 0, 00:05:49.495 "r_mbytes_per_sec": 0, 00:05:49.495 "w_mbytes_per_sec": 0 00:05:49.495 }, 00:05:49.495 "claimed": false, 00:05:49.495 "zoned": false, 00:05:49.495 "supported_io_types": { 00:05:49.495 "read": true, 00:05:49.495 "write": true, 00:05:49.495 "unmap": true, 00:05:49.495 "flush": true, 00:05:49.495 "reset": true, 00:05:49.495 "nvme_admin": false, 00:05:49.495 "nvme_io": false, 00:05:49.495 "nvme_io_md": false, 00:05:49.495 "write_zeroes": true, 00:05:49.495 "zcopy": true, 00:05:49.495 "get_zone_info": false, 00:05:49.495 "zone_management": false, 00:05:49.495 "zone_append": false, 00:05:49.495 "compare": false, 00:05:49.495 "compare_and_write": false, 00:05:49.495 "abort": true, 00:05:49.495 "seek_hole": false, 00:05:49.495 "seek_data": false, 00:05:49.495 "copy": true, 00:05:49.495 "nvme_iov_md": false 00:05:49.495 }, 00:05:49.495 "memory_domains": [ 00:05:49.495 { 00:05:49.495 "dma_device_id": "system", 00:05:49.495 "dma_device_type": 1 00:05:49.495 }, 00:05:49.495 { 00:05:49.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.495 "dma_device_type": 2 00:05:49.495 } 00:05:49.495 ], 00:05:49.495 "driver_specific": {} 00:05:49.495 } 00:05:49.495 ]' 00:05:49.495 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:49.495 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:49.495 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:49.496 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.496 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.496 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.496 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:49.496 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.496 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.496 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.496 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:49.496 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:49.496 04:01:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:49.496 00:05:49.496 real 0m0.165s 00:05:49.496 user 0m0.107s 00:05:49.496 sys 0m0.021s 00:05:49.496 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.496 04:01:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.496 ************************************ 00:05:49.496 END TEST rpc_plugins 00:05:49.496 ************************************ 00:05:49.755 04:01:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.755 04:01:42 rpc -- rpc/rpc.sh@75 -- # run_test 
rpc_trace_cmd_test rpc_trace_cmd_test 00:05:49.755 04:01:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.755 04:01:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.755 04:01:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.755 ************************************ 00:05:49.755 START TEST rpc_trace_cmd_test 00:05:49.755 ************************************ 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:49.755 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid72411", 00:05:49.755 "tpoint_group_mask": "0x8", 00:05:49.755 "iscsi_conn": { 00:05:49.755 "mask": "0x2", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "scsi": { 00:05:49.755 "mask": "0x4", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "bdev": { 00:05:49.755 "mask": "0x8", 00:05:49.755 "tpoint_mask": "0xffffffffffffffff" 00:05:49.755 }, 00:05:49.755 "nvmf_rdma": { 00:05:49.755 "mask": "0x10", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "nvmf_tcp": { 00:05:49.755 "mask": "0x20", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "ftl": { 00:05:49.755 "mask": "0x40", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "blobfs": { 00:05:49.755 "mask": "0x80", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "dsa": { 00:05:49.755 "mask": "0x200", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "thread": { 00:05:49.755 "mask": "0x400", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "nvme_pcie": { 00:05:49.755 "mask": "0x800", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "iaa": { 00:05:49.755 "mask": "0x1000", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "nvme_tcp": { 00:05:49.755 "mask": "0x2000", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "bdev_nvme": { 00:05:49.755 "mask": "0x4000", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 }, 00:05:49.755 "sock": { 00:05:49.755 "mask": "0x8000", 00:05:49.755 "tpoint_mask": "0x0" 00:05:49.755 } 00:05:49.755 }' 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:49.755 04:01:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:49.755 04:01:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:49.755 04:01:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:49.755 04:01:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:49.755 04:01:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:50.013 04:01:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:50.013 
00:05:50.013 real 0m0.287s 00:05:50.013 user 0m0.246s 00:05:50.013 sys 0m0.031s 00:05:50.013 04:01:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.013 04:01:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.013 ************************************ 00:05:50.013 END TEST rpc_trace_cmd_test 00:05:50.013 ************************************ 00:05:50.013 04:01:43 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:50.013 04:01:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:50.013 04:01:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:50.013 04:01:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:50.013 04:01:43 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.013 04:01:43 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.013 04:01:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.013 ************************************ 00:05:50.013 START TEST rpc_daemon_integrity 00:05:50.013 ************************************ 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:50.013 { 00:05:50.013 "name": "Malloc2", 00:05:50.013 "aliases": [ 00:05:50.013 "f8c6fb12-8365-4a43-9253-b35d0869c2f2" 00:05:50.013 ], 00:05:50.013 "product_name": "Malloc disk", 00:05:50.013 "block_size": 512, 00:05:50.013 "num_blocks": 16384, 00:05:50.013 "uuid": "f8c6fb12-8365-4a43-9253-b35d0869c2f2", 00:05:50.013 "assigned_rate_limits": { 00:05:50.013 "rw_ios_per_sec": 0, 00:05:50.013 "rw_mbytes_per_sec": 0, 00:05:50.013 "r_mbytes_per_sec": 0, 00:05:50.013 "w_mbytes_per_sec": 0 00:05:50.013 }, 00:05:50.013 "claimed": false, 00:05:50.013 "zoned": false, 00:05:50.013 "supported_io_types": { 00:05:50.013 "read": true, 00:05:50.013 "write": true, 00:05:50.013 "unmap": true, 00:05:50.013 "flush": true, 00:05:50.013 "reset": true, 00:05:50.013 "nvme_admin": false, 00:05:50.013 "nvme_io": false, 00:05:50.013 
"nvme_io_md": false, 00:05:50.013 "write_zeroes": true, 00:05:50.013 "zcopy": true, 00:05:50.013 "get_zone_info": false, 00:05:50.013 "zone_management": false, 00:05:50.013 "zone_append": false, 00:05:50.013 "compare": false, 00:05:50.013 "compare_and_write": false, 00:05:50.013 "abort": true, 00:05:50.013 "seek_hole": false, 00:05:50.013 "seek_data": false, 00:05:50.013 "copy": true, 00:05:50.013 "nvme_iov_md": false 00:05:50.013 }, 00:05:50.013 "memory_domains": [ 00:05:50.013 { 00:05:50.013 "dma_device_id": "system", 00:05:50.013 "dma_device_type": 1 00:05:50.013 }, 00:05:50.013 { 00:05:50.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.013 "dma_device_type": 2 00:05:50.013 } 00:05:50.013 ], 00:05:50.013 "driver_specific": {} 00:05:50.013 } 00:05:50.013 ]' 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.013 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.271 [2024-07-23 04:01:43.357727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:50.271 [2024-07-23 04:01:43.357780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:50.271 [2024-07-23 04:01:43.357799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17c9590 00:05:50.271 [2024-07-23 04:01:43.357810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:50.271 [2024-07-23 04:01:43.359382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:50.271 [2024-07-23 04:01:43.359410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:50.271 Passthru0 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:50.271 { 00:05:50.271 "name": "Malloc2", 00:05:50.271 "aliases": [ 00:05:50.271 "f8c6fb12-8365-4a43-9253-b35d0869c2f2" 00:05:50.271 ], 00:05:50.271 "product_name": "Malloc disk", 00:05:50.271 "block_size": 512, 00:05:50.271 "num_blocks": 16384, 00:05:50.271 "uuid": "f8c6fb12-8365-4a43-9253-b35d0869c2f2", 00:05:50.271 "assigned_rate_limits": { 00:05:50.271 "rw_ios_per_sec": 0, 00:05:50.271 "rw_mbytes_per_sec": 0, 00:05:50.271 "r_mbytes_per_sec": 0, 00:05:50.271 "w_mbytes_per_sec": 0 00:05:50.271 }, 00:05:50.271 "claimed": true, 00:05:50.271 "claim_type": "exclusive_write", 00:05:50.271 "zoned": false, 00:05:50.271 "supported_io_types": { 00:05:50.271 "read": true, 00:05:50.271 "write": true, 00:05:50.271 "unmap": true, 00:05:50.271 "flush": true, 00:05:50.271 "reset": true, 00:05:50.271 "nvme_admin": false, 00:05:50.271 "nvme_io": false, 00:05:50.271 "nvme_io_md": false, 00:05:50.271 "write_zeroes": true, 00:05:50.271 "zcopy": true, 00:05:50.271 "get_zone_info": false, 
00:05:50.271 "zone_management": false, 00:05:50.271 "zone_append": false, 00:05:50.271 "compare": false, 00:05:50.271 "compare_and_write": false, 00:05:50.271 "abort": true, 00:05:50.271 "seek_hole": false, 00:05:50.271 "seek_data": false, 00:05:50.271 "copy": true, 00:05:50.271 "nvme_iov_md": false 00:05:50.271 }, 00:05:50.271 "memory_domains": [ 00:05:50.271 { 00:05:50.271 "dma_device_id": "system", 00:05:50.271 "dma_device_type": 1 00:05:50.271 }, 00:05:50.271 { 00:05:50.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.271 "dma_device_type": 2 00:05:50.271 } 00:05:50.271 ], 00:05:50.271 "driver_specific": {} 00:05:50.271 }, 00:05:50.271 { 00:05:50.271 "name": "Passthru0", 00:05:50.271 "aliases": [ 00:05:50.271 "08415547-20ec-5aaf-9412-bb6abc4e8f22" 00:05:50.271 ], 00:05:50.271 "product_name": "passthru", 00:05:50.271 "block_size": 512, 00:05:50.271 "num_blocks": 16384, 00:05:50.271 "uuid": "08415547-20ec-5aaf-9412-bb6abc4e8f22", 00:05:50.271 "assigned_rate_limits": { 00:05:50.271 "rw_ios_per_sec": 0, 00:05:50.271 "rw_mbytes_per_sec": 0, 00:05:50.271 "r_mbytes_per_sec": 0, 00:05:50.271 "w_mbytes_per_sec": 0 00:05:50.271 }, 00:05:50.271 "claimed": false, 00:05:50.271 "zoned": false, 00:05:50.271 "supported_io_types": { 00:05:50.271 "read": true, 00:05:50.271 "write": true, 00:05:50.271 "unmap": true, 00:05:50.271 "flush": true, 00:05:50.271 "reset": true, 00:05:50.271 "nvme_admin": false, 00:05:50.271 "nvme_io": false, 00:05:50.271 "nvme_io_md": false, 00:05:50.271 "write_zeroes": true, 00:05:50.271 "zcopy": true, 00:05:50.271 "get_zone_info": false, 00:05:50.271 "zone_management": false, 00:05:50.271 "zone_append": false, 00:05:50.271 "compare": false, 00:05:50.271 "compare_and_write": false, 00:05:50.271 "abort": true, 00:05:50.271 "seek_hole": false, 00:05:50.271 "seek_data": false, 00:05:50.271 "copy": true, 00:05:50.271 "nvme_iov_md": false 00:05:50.271 }, 00:05:50.271 "memory_domains": [ 00:05:50.271 { 00:05:50.271 "dma_device_id": "system", 00:05:50.271 "dma_device_type": 1 00:05:50.271 }, 00:05:50.271 { 00:05:50.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.271 "dma_device_type": 2 00:05:50.271 } 00:05:50.271 ], 00:05:50.271 "driver_specific": { 00:05:50.271 "passthru": { 00:05:50.271 "name": "Passthru0", 00:05:50.271 "base_bdev_name": "Malloc2" 00:05:50.271 } 00:05:50.271 } 00:05:50.271 } 00:05:50.271 ]' 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.271 04:01:43 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:50.271 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:50.272 ************************************ 00:05:50.272 END TEST rpc_daemon_integrity 00:05:50.272 ************************************ 00:05:50.272 04:01:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:50.272 00:05:50.272 real 0m0.324s 00:05:50.272 user 0m0.219s 00:05:50.272 sys 0m0.039s 00:05:50.272 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.272 04:01:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:50.272 04:01:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:50.272 04:01:43 rpc -- rpc/rpc.sh@84 -- # killprocess 72411 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@948 -- # '[' -z 72411 ']' 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@952 -- # kill -0 72411 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@953 -- # uname 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72411 00:05:50.272 killing process with pid 72411 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72411' 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@967 -- # kill 72411 00:05:50.272 04:01:43 rpc -- common/autotest_common.sh@972 -- # wait 72411 00:05:50.837 ************************************ 00:05:50.837 END TEST rpc 00:05:50.837 ************************************ 00:05:50.837 00:05:50.837 real 0m2.995s 00:05:50.837 user 0m3.911s 00:05:50.837 sys 0m0.656s 00:05:50.837 04:01:44 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.837 04:01:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.837 04:01:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.837 04:01:44 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:50.837 04:01:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.837 04:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.837 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:50.837 ************************************ 00:05:50.837 START TEST skip_rpc 00:05:50.837 ************************************ 00:05:50.837 04:01:44 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:51.096 * Looking for test storage... 
00:05:51.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.096 04:01:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:51.096 04:01:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:51.096 04:01:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:51.096 04:01:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.096 04:01:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.096 04:01:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.096 ************************************ 00:05:51.096 START TEST skip_rpc 00:05:51.096 ************************************ 00:05:51.096 04:01:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:51.096 04:01:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=72608 00:05:51.096 04:01:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.096 04:01:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:51.096 04:01:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:51.096 [2024-07-23 04:01:44.298095] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:51.096 [2024-07-23 04:01:44.298193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72608 ] 00:05:51.096 [2024-07-23 04:01:44.419195] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
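Two things are interleaved here. Back in rpc.sh, rpc_trace_cmd_test confirmed that starting spdk_tgt with -e bdev enabled the bdev tracepoint group: trace_get_info reported tpoint_group_mask "0x8" with the bdev mask fully set, and named the shared-memory file /dev/shm/spdk_tgt_trace.pid72411 that the spdk_trace tool reads (the target's own startup banner suggested exactly 'spdk_trace -s spdk_tgt -p 72411'). rpc_daemon_integrity then repeated the malloc/passthru round trip, and the rpc suite finished in about 3 seconds. Meanwhile skip_rpc has started a target with --no-rpc-server, so the rpc_cmd spdk_get_version attempt just below is expected to fail (the es=1 branch). Tracepoint groups can also be toggled at runtime over RPC; the sketch assumes a running target and the in-tree build path for spdk_trace.

    # Sketch: enable the bdev tracepoint group at runtime and read the trace
    # shared memory (binary path and pid lookup are assumptions).
    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py trace_enable_tpoint_group bdev
    ./scripts/rpc.py trace_get_info | jq .tpoint_group_mask
    ./build/bin/spdk_trace -s spdk_tgt -p "$(pgrep -x spdk_tgt)"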
00:05:51.364 [2024-07-23 04:01:44.439243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.364 [2024-07-23 04:01:44.528019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.364 [2024-07-23 04:01:44.602872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 72608 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 72608 ']' 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 72608 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72608 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.641 killing process with pid 72608 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72608' 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 72608 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 72608 00:05:56.641 00:05:56.641 real 0m5.397s 00:05:56.641 user 0m4.948s 00:05:56.641 sys 0m0.353s 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.641 ************************************ 00:05:56.641 END TEST skip_rpc 00:05:56.641 ************************************ 00:05:56.641 04:01:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.641 04:01:49 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.641 04:01:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json 
test_skip_rpc_with_json 00:05:56.641 04:01:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.641 04:01:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.641 04:01:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.641 ************************************ 00:05:56.641 START TEST skip_rpc_with_json 00:05:56.641 ************************************ 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=72696 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 72696 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 72696 ']' 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.641 04:01:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.641 [2024-07-23 04:01:49.742724] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:05:56.641 [2024-07-23 04:01:49.742836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72696 ] 00:05:56.641 [2024-07-23 04:01:49.864393] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
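skip_rpc_with_json (pid 72696) exercises the JSON configuration round trip. Against this fresh target, the first step just below is nvmf_get_transports --trtype tcp, which fails with "transport 'tcp' does not exist" (the JSON-RPC request and error response are printed inline); the test then creates the TCP transport with nvmf_create_transport -t tcp and dumps the entire live configuration with save_config into test/rpc/config.json. The subsystem-by-subsystem JSON that follows (keyring, iobuf, sock with uring selected as the default implementation, accel, bdev, the static scheduler, nvmf including the freshly created TCP transport, iscsi, and so on) is that file; the remainder of the test, beyond this excerpt, feeds the file back into a fresh target and checks that the transport is still present. The save/restore itself can be done by hand with save_config plus load_config; the exact restart flags skip_rpc.sh uses are not shown here and are left out of the sketch.

    # Sketch: capture a running target's configuration, then replay it into a
    # freshly started, unconfigured target over RPC.
    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py save_config > /tmp/spdk_config.json
    # ... stop the old target and start a new bare spdk_tgt ...
    ./scripts/rpc.py load_config < /tmp/spdk_config.json
    ./scripts/rpc.py nvmf_get_transports --trtype tcp    # should now succeed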
00:05:56.641 [2024-07-23 04:01:49.883808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.641 [2024-07-23 04:01:49.941853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.899 [2024-07-23 04:01:49.994960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.466 [2024-07-23 04:01:50.654203] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:57.466 request: 00:05:57.466 { 00:05:57.466 "trtype": "tcp", 00:05:57.466 "method": "nvmf_get_transports", 00:05:57.466 "req_id": 1 00:05:57.466 } 00:05:57.466 Got JSON-RPC error response 00:05:57.466 response: 00:05:57.466 { 00:05:57.466 "code": -19, 00:05:57.466 "message": "No such device" 00:05:57.466 } 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.466 [2024-07-23 04:01:50.666369] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.466 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.724 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.724 04:01:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:57.724 { 00:05:57.724 "subsystems": [ 00:05:57.724 { 00:05:57.724 "subsystem": "keyring", 00:05:57.724 "config": [] 00:05:57.724 }, 00:05:57.724 { 00:05:57.724 "subsystem": "iobuf", 00:05:57.724 "config": [ 00:05:57.724 { 00:05:57.724 "method": "iobuf_set_options", 00:05:57.724 "params": { 00:05:57.724 "small_pool_count": 8192, 00:05:57.724 "large_pool_count": 1024, 00:05:57.724 "small_bufsize": 8192, 00:05:57.724 "large_bufsize": 135168 00:05:57.724 } 00:05:57.724 } 00:05:57.724 ] 00:05:57.724 }, 00:05:57.724 { 00:05:57.724 "subsystem": "sock", 00:05:57.724 "config": [ 00:05:57.724 { 00:05:57.724 "method": "sock_set_default_impl", 00:05:57.724 "params": { 00:05:57.724 "impl_name": "uring" 00:05:57.724 } 00:05:57.724 }, 00:05:57.724 { 00:05:57.724 "method": "sock_impl_set_options", 00:05:57.724 "params": { 00:05:57.724 "impl_name": "ssl", 00:05:57.724 "recv_buf_size": 4096, 00:05:57.724 "send_buf_size": 4096, 00:05:57.724 "enable_recv_pipe": true, 00:05:57.724 "enable_quickack": false, 00:05:57.724 "enable_placement_id": 0, 00:05:57.724 "enable_zerocopy_send_server": true, 00:05:57.724 
"enable_zerocopy_send_client": false, 00:05:57.724 "zerocopy_threshold": 0, 00:05:57.724 "tls_version": 0, 00:05:57.724 "enable_ktls": false 00:05:57.724 } 00:05:57.724 }, 00:05:57.724 { 00:05:57.724 "method": "sock_impl_set_options", 00:05:57.724 "params": { 00:05:57.724 "impl_name": "posix", 00:05:57.724 "recv_buf_size": 2097152, 00:05:57.724 "send_buf_size": 2097152, 00:05:57.724 "enable_recv_pipe": true, 00:05:57.724 "enable_quickack": false, 00:05:57.724 "enable_placement_id": 0, 00:05:57.724 "enable_zerocopy_send_server": true, 00:05:57.724 "enable_zerocopy_send_client": false, 00:05:57.724 "zerocopy_threshold": 0, 00:05:57.724 "tls_version": 0, 00:05:57.724 "enable_ktls": false 00:05:57.724 } 00:05:57.724 }, 00:05:57.724 { 00:05:57.724 "method": "sock_impl_set_options", 00:05:57.724 "params": { 00:05:57.724 "impl_name": "uring", 00:05:57.724 "recv_buf_size": 2097152, 00:05:57.724 "send_buf_size": 2097152, 00:05:57.724 "enable_recv_pipe": true, 00:05:57.724 "enable_quickack": false, 00:05:57.724 "enable_placement_id": 0, 00:05:57.724 "enable_zerocopy_send_server": false, 00:05:57.724 "enable_zerocopy_send_client": false, 00:05:57.724 "zerocopy_threshold": 0, 00:05:57.724 "tls_version": 0, 00:05:57.724 "enable_ktls": false 00:05:57.724 } 00:05:57.724 } 00:05:57.724 ] 00:05:57.724 }, 00:05:57.724 { 00:05:57.725 "subsystem": "vmd", 00:05:57.725 "config": [] 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "subsystem": "accel", 00:05:57.725 "config": [ 00:05:57.725 { 00:05:57.725 "method": "accel_set_options", 00:05:57.725 "params": { 00:05:57.725 "small_cache_size": 128, 00:05:57.725 "large_cache_size": 16, 00:05:57.725 "task_count": 2048, 00:05:57.725 "sequence_count": 2048, 00:05:57.725 "buf_count": 2048 00:05:57.725 } 00:05:57.725 } 00:05:57.725 ] 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "subsystem": "bdev", 00:05:57.725 "config": [ 00:05:57.725 { 00:05:57.725 "method": "bdev_set_options", 00:05:57.725 "params": { 00:05:57.725 "bdev_io_pool_size": 65535, 00:05:57.725 "bdev_io_cache_size": 256, 00:05:57.725 "bdev_auto_examine": true, 00:05:57.725 "iobuf_small_cache_size": 128, 00:05:57.725 "iobuf_large_cache_size": 16 00:05:57.725 } 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "method": "bdev_raid_set_options", 00:05:57.725 "params": { 00:05:57.725 "process_window_size_kb": 1024, 00:05:57.725 "process_max_bandwidth_mb_sec": 0 00:05:57.725 } 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "method": "bdev_iscsi_set_options", 00:05:57.725 "params": { 00:05:57.725 "timeout_sec": 30 00:05:57.725 } 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "method": "bdev_nvme_set_options", 00:05:57.725 "params": { 00:05:57.725 "action_on_timeout": "none", 00:05:57.725 "timeout_us": 0, 00:05:57.725 "timeout_admin_us": 0, 00:05:57.725 "keep_alive_timeout_ms": 10000, 00:05:57.725 "arbitration_burst": 0, 00:05:57.725 "low_priority_weight": 0, 00:05:57.725 "medium_priority_weight": 0, 00:05:57.725 "high_priority_weight": 0, 00:05:57.725 "nvme_adminq_poll_period_us": 10000, 00:05:57.725 "nvme_ioq_poll_period_us": 0, 00:05:57.725 "io_queue_requests": 0, 00:05:57.725 "delay_cmd_submit": true, 00:05:57.725 "transport_retry_count": 4, 00:05:57.725 "bdev_retry_count": 3, 00:05:57.725 "transport_ack_timeout": 0, 00:05:57.725 "ctrlr_loss_timeout_sec": 0, 00:05:57.725 "reconnect_delay_sec": 0, 00:05:57.725 "fast_io_fail_timeout_sec": 0, 00:05:57.725 "disable_auto_failback": false, 00:05:57.725 "generate_uuids": false, 00:05:57.725 "transport_tos": 0, 00:05:57.725 "nvme_error_stat": false, 00:05:57.725 "rdma_srq_size": 0, 
00:05:57.725 "io_path_stat": false, 00:05:57.725 "allow_accel_sequence": false, 00:05:57.725 "rdma_max_cq_size": 0, 00:05:57.725 "rdma_cm_event_timeout_ms": 0, 00:05:57.725 "dhchap_digests": [ 00:05:57.725 "sha256", 00:05:57.725 "sha384", 00:05:57.725 "sha512" 00:05:57.725 ], 00:05:57.725 "dhchap_dhgroups": [ 00:05:57.725 "null", 00:05:57.725 "ffdhe2048", 00:05:57.725 "ffdhe3072", 00:05:57.725 "ffdhe4096", 00:05:57.725 "ffdhe6144", 00:05:57.725 "ffdhe8192" 00:05:57.725 ] 00:05:57.725 } 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "method": "bdev_nvme_set_hotplug", 00:05:57.725 "params": { 00:05:57.725 "period_us": 100000, 00:05:57.725 "enable": false 00:05:57.725 } 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "method": "bdev_wait_for_examine" 00:05:57.725 } 00:05:57.725 ] 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "subsystem": "scsi", 00:05:57.725 "config": null 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "subsystem": "scheduler", 00:05:57.725 "config": [ 00:05:57.725 { 00:05:57.725 "method": "framework_set_scheduler", 00:05:57.725 "params": { 00:05:57.725 "name": "static" 00:05:57.725 } 00:05:57.725 } 00:05:57.725 ] 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "subsystem": "vhost_scsi", 00:05:57.725 "config": [] 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "subsystem": "vhost_blk", 00:05:57.725 "config": [] 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "subsystem": "ublk", 00:05:57.725 "config": [] 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "subsystem": "nbd", 00:05:57.725 "config": [] 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "subsystem": "nvmf", 00:05:57.725 "config": [ 00:05:57.725 { 00:05:57.725 "method": "nvmf_set_config", 00:05:57.725 "params": { 00:05:57.725 "discovery_filter": "match_any", 00:05:57.725 "admin_cmd_passthru": { 00:05:57.725 "identify_ctrlr": false 00:05:57.725 } 00:05:57.725 } 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "method": "nvmf_set_max_subsystems", 00:05:57.725 "params": { 00:05:57.725 "max_subsystems": 1024 00:05:57.725 } 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "method": "nvmf_set_crdt", 00:05:57.725 "params": { 00:05:57.725 "crdt1": 0, 00:05:57.725 "crdt2": 0, 00:05:57.725 "crdt3": 0 00:05:57.725 } 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "method": "nvmf_create_transport", 00:05:57.725 "params": { 00:05:57.725 "trtype": "TCP", 00:05:57.725 "max_queue_depth": 128, 00:05:57.725 "max_io_qpairs_per_ctrlr": 127, 00:05:57.725 "in_capsule_data_size": 4096, 00:05:57.725 "max_io_size": 131072, 00:05:57.725 "io_unit_size": 131072, 00:05:57.725 "max_aq_depth": 128, 00:05:57.725 "num_shared_buffers": 511, 00:05:57.725 "buf_cache_size": 4294967295, 00:05:57.725 "dif_insert_or_strip": false, 00:05:57.725 "zcopy": false, 00:05:57.725 "c2h_success": true, 00:05:57.725 "sock_priority": 0, 00:05:57.725 "abort_timeout_sec": 1, 00:05:57.725 "ack_timeout": 0, 00:05:57.725 "data_wr_pool_size": 0 00:05:57.725 } 00:05:57.725 } 00:05:57.725 ] 00:05:57.725 }, 00:05:57.725 { 00:05:57.725 "subsystem": "iscsi", 00:05:57.725 "config": [ 00:05:57.725 { 00:05:57.725 "method": "iscsi_set_options", 00:05:57.725 "params": { 00:05:57.725 "node_base": "iqn.2016-06.io.spdk", 00:05:57.725 "max_sessions": 128, 00:05:57.725 "max_connections_per_session": 2, 00:05:57.725 "max_queue_depth": 64, 00:05:57.725 "default_time2wait": 2, 00:05:57.725 "default_time2retain": 20, 00:05:57.725 "first_burst_length": 8192, 00:05:57.725 "immediate_data": true, 00:05:57.725 "allow_duplicated_isid": false, 00:05:57.725 "error_recovery_level": 0, 00:05:57.725 "nop_timeout": 60, 00:05:57.725 "nop_in_interval": 
30, 00:05:57.725 "disable_chap": false, 00:05:57.725 "require_chap": false, 00:05:57.725 "mutual_chap": false, 00:05:57.725 "chap_group": 0, 00:05:57.725 "max_large_datain_per_connection": 64, 00:05:57.725 "max_r2t_per_connection": 4, 00:05:57.725 "pdu_pool_size": 36864, 00:05:57.725 "immediate_data_pool_size": 16384, 00:05:57.725 "data_out_pool_size": 2048 00:05:57.725 } 00:05:57.725 } 00:05:57.725 ] 00:05:57.725 } 00:05:57.725 ] 00:05:57.725 } 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 72696 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 72696 ']' 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 72696 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72696 00:05:57.725 killing process with pid 72696 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72696' 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 72696 00:05:57.725 04:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 72696 00:05:57.983 04:01:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=72718 00:05:57.983 04:01:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:57.983 04:01:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 72718 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 72718 ']' 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 72718 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72718 00:06:03.273 killing process with pid 72718 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72718' 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 72718 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 72718 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.273 04:01:56 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.273 ************************************ 00:06:03.273 END TEST skip_rpc_with_json 00:06:03.273 ************************************ 00:06:03.273 00:06:03.273 real 0m6.925s 00:06:03.273 user 0m6.609s 00:06:03.273 sys 0m0.630s 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.273 04:01:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 04:01:56 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.533 04:01:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:03.533 04:01:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.533 04:01:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.533 04:01:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 ************************************ 00:06:03.533 START TEST skip_rpc_with_delay 00:06:03.533 ************************************ 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.533 [2024-07-23 04:01:56.723253] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
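Two launch modes were just exercised back to back: skip_rpc_with_json replays the saved snapshot with the RPC server disabled and checks that the TCP transport really initialized, while skip_rpc_with_delay asserts that --wait-for-rpc is rejected once the RPC server is gone. Stripped of the wrappers, that is roughly the following (a sketch from the repo root; how the target's output ends up in test/rpc/log.txt is handled by the harness and is not shown here):

  # replay the saved JSON config with no RPC server; the harness kills it after a short sleep
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
  grep -q 'TCP Transport Init' test/rpc/log.txt   # proves the nvmf transport came back from the JSON alone
  # the combination the delay test expects to fail:
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # -> Cannot use '--wait-for-rpc' if no RPC server is going to be started.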
00:06:03.533 [2024-07-23 04:01:56.723395] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.533 ************************************ 00:06:03.533 END TEST skip_rpc_with_delay 00:06:03.533 ************************************ 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.533 00:06:03.533 real 0m0.087s 00:06:03.533 user 0m0.050s 00:06:03.533 sys 0m0.034s 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.533 04:01:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 04:01:56 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.533 04:01:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:03.533 04:01:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:03.533 04:01:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:03.533 04:01:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.533 04:01:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.533 04:01:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 ************************************ 00:06:03.533 START TEST exit_on_failed_rpc_init 00:06:03.533 ************************************ 00:06:03.533 04:01:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:03.533 04:01:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=72833 00:06:03.533 04:01:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.533 04:01:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 72833 00:06:03.533 04:01:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 72833 ']' 00:06:03.533 04:01:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.533 04:01:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.533 04:01:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.533 04:01:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.533 04:01:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.533 [2024-07-23 04:01:56.856160] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:06:03.533 [2024-07-23 04:01:56.856768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72833 ] 00:06:03.792 [2024-07-23 04:01:56.977845] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.792 [2024-07-23 04:01:56.996979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.792 [2024-07-23 04:01:57.059345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.792 [2024-07-23 04:01:57.114831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:04.727 04:01:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.727 [2024-07-23 04:01:57.898515] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:04.727 [2024-07-23 04:01:57.898853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72851 ] 00:06:04.727 [2024-07-23 04:01:58.020759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:04.727 [2024-07-23 04:01:58.042015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.985 [2024-07-23 04:01:58.122568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.985 [2024-07-23 04:01:58.122676] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:04.985 [2024-07-23 04:01:58.122694] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:04.985 [2024-07-23 04:01:58.122705] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 72833 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 72833 ']' 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 72833 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72833 00:06:04.985 killing process with pid 72833 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72833' 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 72833 00:06:04.985 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 72833 00:06:05.553 00:06:05.553 real 0m1.849s 00:06:05.553 user 0m2.137s 00:06:05.553 sys 0m0.426s 00:06:05.553 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.553 04:01:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.553 ************************************ 00:06:05.553 END TEST exit_on_failed_rpc_init 00:06:05.553 ************************************ 00:06:05.553 04:01:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:05.553 04:01:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.553 ************************************ 00:06:05.553 END TEST skip_rpc 00:06:05.553 ************************************ 00:06:05.553 00:06:05.553 real 0m14.550s 00:06:05.553 user 0m13.842s 00:06:05.553 sys 0m1.617s 00:06:05.553 04:01:58 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:05.553 04:01:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.553 04:01:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.553 04:01:58 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.553 04:01:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.553 04:01:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.553 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:05.553 ************************************ 00:06:05.553 START TEST rpc_client 00:06:05.553 ************************************ 00:06:05.553 04:01:58 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.553 * Looking for test storage... 00:06:05.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:05.553 04:01:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:05.553 OK 00:06:05.553 04:01:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:05.553 00:06:05.553 real 0m0.097s 00:06:05.553 user 0m0.047s 00:06:05.553 sys 0m0.055s 00:06:05.553 04:01:58 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.553 ************************************ 00:06:05.553 END TEST rpc_client 00:06:05.553 ************************************ 00:06:05.553 04:01:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:05.553 04:01:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.553 04:01:58 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.553 04:01:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.553 04:01:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.553 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:05.553 ************************************ 00:06:05.553 START TEST json_config 00:06:05.553 ************************************ 00:06:05.553 04:01:58 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.812 04:01:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.812 04:01:58 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.812 04:01:58 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.812 04:01:58 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.812 04:01:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.812 04:01:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.812 04:01:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.812 04:01:58 json_config -- paths/export.sh@5 -- # export PATH 00:06:05.812 04:01:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@47 -- # : 0 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:05.812 04:01:58 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:05.812 04:01:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:05.812 04:01:58 json_config -- 
json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:05.812 04:01:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:05.812 04:01:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:05.813 INFO: JSON configuration test init 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:05.813 04:01:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.813 04:01:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:05.813 04:01:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.813 04:01:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.813 Waiting for target to run... 00:06:05.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.813 04:01:58 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:05.813 04:01:58 json_config -- json_config/common.sh@9 -- # local app=target 00:06:05.813 04:01:58 json_config -- json_config/common.sh@10 -- # shift 00:06:05.813 04:01:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:05.813 04:01:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:05.813 04:01:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:05.813 04:01:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.813 04:01:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.813 04:01:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72969 00:06:05.813 04:01:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
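Unlike the skip_rpc target earlier, the json_config target listens on /var/tmp/spdk_tgt.sock, so every RPC in this suite carries -s. The launch and first configuration step traced in the next records amount to roughly this (a sketch from the repo root; -s 1024 caps the target's hugepage memory in MB):

  # start the json_config target paused, waiting for RPCs on its own socket
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # feed it a configuration over that socket; the harness pipes gen_nvme.sh output into load_config
  scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config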
00:06:05.813 04:01:58 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:05.813 04:01:58 json_config -- json_config/common.sh@25 -- # waitforlisten 72969 /var/tmp/spdk_tgt.sock 00:06:05.813 04:01:58 json_config -- common/autotest_common.sh@829 -- # '[' -z 72969 ']' 00:06:05.813 04:01:58 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.813 04:01:58 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.813 04:01:58 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.813 04:01:58 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.813 04:01:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.813 [2024-07-23 04:01:59.027184] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:05.813 [2024-07-23 04:01:59.027311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72969 ] 00:06:06.379 [2024-07-23 04:01:59.423001] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.379 [2024-07-23 04:01:59.440036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.379 [2024-07-23 04:01:59.508266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.638 00:06:06.638 04:01:59 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.638 04:01:59 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:06.638 04:01:59 json_config -- json_config/common.sh@26 -- # echo '' 00:06:06.638 04:01:59 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:06.638 04:01:59 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:06.638 04:01:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.638 04:01:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.897 04:01:59 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:06.897 04:01:59 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:06.898 04:01:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.898 04:01:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.898 04:02:00 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:06.898 04:02:00 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:06.898 04:02:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:06.898 [2024-07-23 04:02:00.236251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.157 04:02:00 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:07.157 04:02:00 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:07.157 04:02:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.157 
04:02:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.157 04:02:00 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:07.157 04:02:00 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:07.157 04:02:00 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:07.157 04:02:00 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:07.157 04:02:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:07.157 04:02:00 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@51 -- # sort 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:07.415 04:02:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.415 04:02:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:07.415 04:02:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.415 04:02:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:07.415 04:02:00 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:07.415 04:02:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:07.686 MallocForNvmf0 00:06:07.686 04:02:00 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:07.686 04:02:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 
--name MallocForNvmf1 00:06:07.969 MallocForNvmf1 00:06:07.969 04:02:01 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:07.969 04:02:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:08.228 [2024-07-23 04:02:01.339603] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.228 04:02:01 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.228 04:02:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.485 04:02:01 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.485 04:02:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.485 04:02:01 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.485 04:02:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.743 04:02:02 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:08.743 04:02:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.001 [2024-07-23 04:02:02.196035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.002 04:02:02 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:09.002 04:02:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.002 04:02:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.002 04:02:02 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:09.002 04:02:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.002 04:02:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.002 04:02:02 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:09.002 04:02:02 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.002 04:02:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.259 MallocBdevForConfigChangeCheck 00:06:09.259 04:02:02 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:09.259 04:02:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.259 04:02:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.259 04:02:02 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:09.259 04:02:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock save_config 00:06:09.824 INFO: shutting down applications... 00:06:09.824 04:02:02 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:09.824 04:02:02 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:09.824 04:02:02 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:09.824 04:02:02 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:09.824 04:02:02 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:10.085 Calling clear_iscsi_subsystem 00:06:10.085 Calling clear_nvmf_subsystem 00:06:10.085 Calling clear_nbd_subsystem 00:06:10.085 Calling clear_ublk_subsystem 00:06:10.085 Calling clear_vhost_blk_subsystem 00:06:10.085 Calling clear_vhost_scsi_subsystem 00:06:10.085 Calling clear_bdev_subsystem 00:06:10.085 04:02:03 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:10.085 04:02:03 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:10.085 04:02:03 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:10.085 04:02:03 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.085 04:02:03 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:10.085 04:02:03 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:10.350 04:02:03 json_config -- json_config/json_config.sh@349 -- # break 00:06:10.350 04:02:03 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:10.350 04:02:03 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:10.350 04:02:03 json_config -- json_config/common.sh@31 -- # local app=target 00:06:10.350 04:02:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:10.350 04:02:03 json_config -- json_config/common.sh@35 -- # [[ -n 72969 ]] 00:06:10.350 04:02:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 72969 00:06:10.350 04:02:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:10.350 04:02:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.350 04:02:03 json_config -- json_config/common.sh@41 -- # kill -0 72969 00:06:10.350 04:02:03 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.917 04:02:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.917 04:02:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.917 04:02:04 json_config -- json_config/common.sh@41 -- # kill -0 72969 00:06:10.917 04:02:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.917 SPDK target shutdown done 00:06:10.917 04:02:04 json_config -- json_config/common.sh@43 -- # break 00:06:10.917 04:02:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.917 04:02:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:10.917 INFO: relaunching applications... 00:06:10.917 04:02:04 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
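The round trip here is: snapshot the live configuration with save_config, clear every subsystem, stop the target, then boot a fresh target straight from the snapshot. Reduced to plain commands it looks roughly like this (a sketch from the repo root; 72969 is the pid the harness tracked for this run, and spdk_tgt_config.json is the configs_path declared earlier for the target app):

  # snapshot the running configuration
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  # wipe the subsystems so shutdown is clean, then stop the target
  test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  kill -SIGINT 72969          # the harness then polls kill -0 until the pid is gone
  # relaunch directly from the snapshot; no RPCs are needed to rebuild the config
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &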
00:06:10.917 04:02:04 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.917 04:02:04 json_config -- json_config/common.sh@9 -- # local app=target 00:06:10.917 04:02:04 json_config -- json_config/common.sh@10 -- # shift 00:06:10.917 04:02:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.917 04:02:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.917 04:02:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.917 04:02:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.917 04:02:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.917 04:02:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73159 00:06:10.917 04:02:04 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.917 04:02:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.917 Waiting for target to run... 00:06:10.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.917 04:02:04 json_config -- json_config/common.sh@25 -- # waitforlisten 73159 /var/tmp/spdk_tgt.sock 00:06:10.917 04:02:04 json_config -- common/autotest_common.sh@829 -- # '[' -z 73159 ']' 00:06:10.917 04:02:04 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.917 04:02:04 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.917 04:02:04 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.917 04:02:04 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.917 04:02:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.917 [2024-07-23 04:02:04.180752] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:10.917 [2024-07-23 04:02:04.181247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73159 ] 00:06:11.482 [2024-07-23 04:02:04.594117] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.483 [2024-07-23 04:02:04.612063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.483 [2024-07-23 04:02:04.677486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.483 [2024-07-23 04:02:04.803864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.741 [2024-07-23 04:02:05.004601] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.741 [2024-07-23 04:02:05.036658] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:11.999 00:06:11.999 INFO: Checking if target configuration is the same... 
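The check that follows diffs a fresh save_config dump against the spdk_tgt_config.json the target was just booted from; json_diff.sh normalizes both sides with config_filter.py before diffing so that ordering differences cannot cause false mismatches. In essence (a sketch; the /tmp file names here are arbitrary stand-ins for the mktemp files in the trace):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file.json
  diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'

The second pass further down deletes MallocBdevForConfigChangeCheck first, so the same diff is then expected to return 1 and prove that a configuration change is detected.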
00:06:11.999 04:02:05 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.999 04:02:05 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:11.999 04:02:05 json_config -- json_config/common.sh@26 -- # echo '' 00:06:11.999 04:02:05 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:11.999 04:02:05 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:11.999 04:02:05 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.999 04:02:05 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:11.999 04:02:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.999 + '[' 2 -ne 2 ']' 00:06:11.999 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:11.999 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:11.999 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:11.999 +++ basename /dev/fd/62 00:06:11.999 ++ mktemp /tmp/62.XXX 00:06:11.999 + tmp_file_1=/tmp/62.6No 00:06:11.999 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.999 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:11.999 + tmp_file_2=/tmp/spdk_tgt_config.json.PIA 00:06:11.999 + ret=0 00:06:11.999 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:12.257 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:12.257 + diff -u /tmp/62.6No /tmp/spdk_tgt_config.json.PIA 00:06:12.257 INFO: JSON config files are the same 00:06:12.257 + echo 'INFO: JSON config files are the same' 00:06:12.257 + rm /tmp/62.6No /tmp/spdk_tgt_config.json.PIA 00:06:12.257 + exit 0 00:06:12.257 INFO: changing configuration and checking if this can be detected... 00:06:12.257 04:02:05 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:12.257 04:02:05 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:12.257 04:02:05 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.257 04:02:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.529 04:02:05 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.530 04:02:05 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:12.530 04:02:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.530 + '[' 2 -ne 2 ']' 00:06:12.530 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:12.530 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:12.530 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:12.530 +++ basename /dev/fd/62 00:06:12.530 ++ mktemp /tmp/62.XXX 00:06:12.530 + tmp_file_1=/tmp/62.OAw 00:06:12.530 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.530 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.530 + tmp_file_2=/tmp/spdk_tgt_config.json.2pI 00:06:12.530 + ret=0 00:06:12.530 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:13.098 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:13.098 + diff -u /tmp/62.OAw /tmp/spdk_tgt_config.json.2pI 00:06:13.098 + ret=1 00:06:13.098 + echo '=== Start of file: /tmp/62.OAw ===' 00:06:13.098 + cat /tmp/62.OAw 00:06:13.098 + echo '=== End of file: /tmp/62.OAw ===' 00:06:13.098 + echo '' 00:06:13.098 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2pI ===' 00:06:13.098 + cat /tmp/spdk_tgt_config.json.2pI 00:06:13.098 + echo '=== End of file: /tmp/spdk_tgt_config.json.2pI ===' 00:06:13.098 + echo '' 00:06:13.098 + rm /tmp/62.OAw /tmp/spdk_tgt_config.json.2pI 00:06:13.098 + exit 1 00:06:13.098 INFO: configuration change detected. 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@321 -- # [[ -n 73159 ]] 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.098 04:02:06 json_config -- json_config/json_config.sh@327 -- # killprocess 73159 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@948 -- # '[' -z 73159 ']' 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@952 -- # kill -0 73159 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@953 -- # uname 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73159 00:06:13.098 
killing process with pid 73159 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73159' 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@967 -- # kill 73159 00:06:13.098 04:02:06 json_config -- common/autotest_common.sh@972 -- # wait 73159 00:06:13.356 04:02:06 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:13.356 04:02:06 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:13.356 04:02:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.356 04:02:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 INFO: Success 00:06:13.356 04:02:06 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:13.356 04:02:06 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:13.356 ************************************ 00:06:13.356 END TEST json_config 00:06:13.356 ************************************ 00:06:13.356 00:06:13.356 real 0m7.695s 00:06:13.356 user 0m10.740s 00:06:13.356 sys 0m1.641s 00:06:13.356 04:02:06 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.356 04:02:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 04:02:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.356 04:02:06 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:13.356 04:02:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.356 04:02:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.356 04:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 ************************************ 00:06:13.356 START TEST json_config_extra_key 00:06:13.356 ************************************ 00:06:13.356 04:02:06 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:13.356 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:13.356 04:02:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:13.356 04:02:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.357 04:02:06 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.357 04:02:06 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.357 04:02:06 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.357 04:02:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.357 04:02:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.357 04:02:06 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.357 04:02:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:13.357 04:02:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.357 04:02:06 json_config_extra_key -- 
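In the trace above, nvmf/common.sh sets NVME_HOSTNQN from `nvme gen-hostnqn` and NVME_HOSTID ends up being the UUID portion of that NQN. A hedged sketch of one way to derive both values, assuming nvme-cli is installed and emits the usual nqn.2014-08.org.nvmexpress:uuid:<uuid> format (the real common.sh may extract the ID differently, and the UUID changes on every run):

    # Sketch: derive a host NQN and matching host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}          # strip everything up to the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"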
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.357 04:02:06 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:13.615 INFO: launching applications... 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:13.615 04:02:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.615 Waiting for target to run... 00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=73294 00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:13.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 73294 /var/tmp/spdk_tgt.sock 00:06:13.615 04:02:06 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 73294 ']' 00:06:13.615 04:02:06 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.615 04:02:06 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:13.615 04:02:06 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.615 04:02:06 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.615 04:02:06 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.615 04:02:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.615 [2024-07-23 04:02:06.767254] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:13.615 [2024-07-23 04:02:06.767349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73294 ] 00:06:13.873 [2024-07-23 04:02:07.174309] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:13.873 [2024-07-23 04:02:07.194317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.132 [2024-07-23 04:02:07.260994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.132 [2024-07-23 04:02:07.281124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.760 04:02:07 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.760 00:06:14.760 INFO: shutting down applications... 00:06:14.760 04:02:07 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:14.760 04:02:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:14.760 04:02:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
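json_config_test_start_app launches spdk_tgt with -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that wait, reusing the rpc.py -s/-t options that appear elsewhere in this log; the real waitforlisten in autotest_common.sh does more bookkeeping (retry accounting, error traps):

    # Sketch: start spdk_tgt from a JSON config and poll its RPC socket.
    rootdir=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk_tgt.sock

    "$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
        --json "$rootdir/test/json_config/extra_key.json" &
    tgt_pid=$!

    for _ in $(seq 1 100); do
      # rpc_get_methods succeeds only once the target is up and listening.
      if "$rootdir/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
        echo "target $tgt_pid is listening on $sock"
        break
      fi
      sleep 0.1
    done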
00:06:14.760 04:02:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:14.760 04:02:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:14.760 04:02:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:14.760 04:02:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 73294 ]] 00:06:14.760 04:02:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 73294 00:06:14.760 04:02:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:14.760 04:02:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.760 04:02:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73294 00:06:14.760 04:02:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.017 04:02:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.017 04:02:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.017 04:02:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73294 00:06:15.017 04:02:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.017 04:02:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:15.017 04:02:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.017 04:02:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.017 SPDK target shutdown done 00:06:15.017 04:02:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:15.017 Success 00:06:15.017 00:06:15.017 real 0m1.642s 00:06:15.017 user 0m1.538s 00:06:15.017 sys 0m0.435s 00:06:15.017 ************************************ 00:06:15.017 END TEST json_config_extra_key 00:06:15.017 ************************************ 00:06:15.017 04:02:08 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.017 04:02:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.017 04:02:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.017 04:02:08 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.017 04:02:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.017 04:02:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.017 04:02:08 -- common/autotest_common.sh@10 -- # set +x 00:06:15.017 ************************************ 00:06:15.017 START TEST alias_rpc 00:06:15.017 ************************************ 00:06:15.017 04:02:08 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.274 * Looking for test storage... 00:06:15.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:15.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
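The shutdown traced above sends SIGINT to the app, then polls kill -0 up to 30 times with a 0.5 s sleep, and only prints "SPDK target shutdown done" once the PID is gone. A condensed sketch of that loop; the timeout branch here (falling back to SIGKILL) is a simplification of the real error handling in json_config/common.sh:

    # Sketch: the SIGINT-then-poll shutdown pattern from the trace above.
    shutdown_app() {
      local pid=$1
      kill -SIGINT "$pid"
      for (( i = 0; i < 30; i++ )); do
        # kill -0 fails once the process has exited.
        if ! kill -0 "$pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          return 0
        fi
        sleep 0.5
      done
      echo "app $pid still alive after 15s, sending SIGKILL" >&2
      kill -9 "$pid" 2>/dev/null || true
      return 1
    }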
00:06:15.274 04:02:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:15.274 04:02:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=73364 00:06:15.274 04:02:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 73364 00:06:15.274 04:02:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.274 04:02:08 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 73364 ']' 00:06:15.274 04:02:08 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.274 04:02:08 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.274 04:02:08 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.274 04:02:08 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.274 04:02:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.274 [2024-07-23 04:02:08.456313] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:15.274 [2024-07-23 04:02:08.456414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73364 ] 00:06:15.274 [2024-07-23 04:02:08.573108] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.274 [2024-07-23 04:02:08.588675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.532 [2024-07-23 04:02:08.663541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.532 [2024-07-23 04:02:08.716501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.791 04:02:08 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.791 04:02:08 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.791 04:02:08 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:16.049 04:02:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 73364 00:06:16.049 04:02:09 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 73364 ']' 00:06:16.049 04:02:09 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 73364 00:06:16.049 04:02:09 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:16.049 04:02:09 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.049 04:02:09 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73364 00:06:16.049 04:02:09 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.049 killing process with pid 73364 00:06:16.049 04:02:09 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.049 04:02:09 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73364' 00:06:16.049 04:02:09 alias_rpc -- common/autotest_common.sh@967 -- # kill 73364 00:06:16.049 04:02:09 alias_rpc -- common/autotest_common.sh@972 -- # wait 73364 00:06:16.307 ************************************ 00:06:16.307 END TEST alias_rpc 00:06:16.307 ************************************ 00:06:16.307 00:06:16.307 real 0m1.262s 00:06:16.307 user 0m1.335s 00:06:16.307 sys 0m0.383s 00:06:16.307 04:02:09 alias_rpc -- 
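The alias_rpc test drives a plain spdk_tgt with `rpc.py load_config -i`, feeding a configuration on stdin; the -i flag appears to make load_config accept the deprecated aliased method names this test exercises. A hedged usage sketch, assuming load_config reads stdin by default and the default /var/tmp/spdk.sock socket; the JSON body is illustrative, not the test's actual input:

    # Sketch: feed a small config to a running target via load_config.
    rootdir=/home/vagrant/spdk_repo/spdk

    "$rootdir/scripts/rpc.py" load_config -i <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 } }
          ]
        }
      ]
    }
    JSON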
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.307 04:02:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.307 04:02:09 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.307 04:02:09 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:16.307 04:02:09 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:16.307 04:02:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.307 04:02:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.307 04:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:16.307 ************************************ 00:06:16.307 START TEST spdkcli_tcp 00:06:16.307 ************************************ 00:06:16.307 04:02:09 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:16.566 * Looking for test storage... 00:06:16.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:16.566 04:02:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:16.566 04:02:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:16.566 04:02:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:16.566 04:02:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:16.566 04:02:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:16.566 04:02:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:16.566 04:02:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:16.566 04:02:09 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.566 04:02:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.566 04:02:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=73427 00:06:16.566 04:02:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 73427 00:06:16.566 04:02:09 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 73427 ']' 00:06:16.566 04:02:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:16.566 04:02:09 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.566 04:02:09 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.566 04:02:09 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.566 04:02:09 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.566 04:02:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.566 [2024-07-23 04:02:09.779398] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:16.566 [2024-07-23 04:02:09.779498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73427 ] 00:06:16.566 [2024-07-23 04:02:09.902004] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:16.824 [2024-07-23 04:02:09.919182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.824 [2024-07-23 04:02:10.013721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.824 [2024-07-23 04:02:10.013731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.824 [2024-07-23 04:02:10.071581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.389 04:02:10 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.389 04:02:10 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:17.389 04:02:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=73444 00:06:17.390 04:02:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:17.390 04:02:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:17.648 [ 00:06:17.648 "bdev_malloc_delete", 00:06:17.648 "bdev_malloc_create", 00:06:17.648 "bdev_null_resize", 00:06:17.648 "bdev_null_delete", 00:06:17.648 "bdev_null_create", 00:06:17.648 "bdev_nvme_cuse_unregister", 00:06:17.648 "bdev_nvme_cuse_register", 00:06:17.648 "bdev_opal_new_user", 00:06:17.648 "bdev_opal_set_lock_state", 00:06:17.648 "bdev_opal_delete", 00:06:17.648 "bdev_opal_get_info", 00:06:17.648 "bdev_opal_create", 00:06:17.648 "bdev_nvme_opal_revert", 00:06:17.648 "bdev_nvme_opal_init", 00:06:17.648 "bdev_nvme_send_cmd", 00:06:17.648 "bdev_nvme_get_path_iostat", 00:06:17.648 "bdev_nvme_get_mdns_discovery_info", 00:06:17.648 "bdev_nvme_stop_mdns_discovery", 00:06:17.648 "bdev_nvme_start_mdns_discovery", 00:06:17.648 "bdev_nvme_set_multipath_policy", 00:06:17.648 "bdev_nvme_set_preferred_path", 00:06:17.648 "bdev_nvme_get_io_paths", 00:06:17.648 "bdev_nvme_remove_error_injection", 00:06:17.648 "bdev_nvme_add_error_injection", 00:06:17.648 "bdev_nvme_get_discovery_info", 00:06:17.648 "bdev_nvme_stop_discovery", 00:06:17.648 "bdev_nvme_start_discovery", 00:06:17.648 "bdev_nvme_get_controller_health_info", 00:06:17.648 "bdev_nvme_disable_controller", 00:06:17.648 "bdev_nvme_enable_controller", 00:06:17.648 "bdev_nvme_reset_controller", 00:06:17.648 "bdev_nvme_get_transport_statistics", 00:06:17.648 "bdev_nvme_apply_firmware", 00:06:17.648 "bdev_nvme_detach_controller", 00:06:17.648 "bdev_nvme_get_controllers", 00:06:17.648 "bdev_nvme_attach_controller", 00:06:17.648 "bdev_nvme_set_hotplug", 00:06:17.648 "bdev_nvme_set_options", 00:06:17.648 "bdev_passthru_delete", 00:06:17.648 "bdev_passthru_create", 00:06:17.648 "bdev_lvol_set_parent_bdev", 00:06:17.648 "bdev_lvol_set_parent", 00:06:17.648 "bdev_lvol_check_shallow_copy", 00:06:17.648 "bdev_lvol_start_shallow_copy", 00:06:17.648 "bdev_lvol_grow_lvstore", 00:06:17.649 "bdev_lvol_get_lvols", 00:06:17.649 "bdev_lvol_get_lvstores", 00:06:17.649 "bdev_lvol_delete", 00:06:17.649 "bdev_lvol_set_read_only", 00:06:17.649 "bdev_lvol_resize", 00:06:17.649 "bdev_lvol_decouple_parent", 00:06:17.649 "bdev_lvol_inflate", 00:06:17.649 "bdev_lvol_rename", 00:06:17.649 "bdev_lvol_clone_bdev", 00:06:17.649 "bdev_lvol_clone", 00:06:17.649 "bdev_lvol_snapshot", 00:06:17.649 "bdev_lvol_create", 00:06:17.649 "bdev_lvol_delete_lvstore", 00:06:17.649 "bdev_lvol_rename_lvstore", 00:06:17.649 "bdev_lvol_create_lvstore", 00:06:17.649 "bdev_raid_set_options", 00:06:17.649 "bdev_raid_remove_base_bdev", 00:06:17.649 "bdev_raid_add_base_bdev", 00:06:17.649 "bdev_raid_delete", 00:06:17.649 "bdev_raid_create", 
00:06:17.649 "bdev_raid_get_bdevs", 00:06:17.649 "bdev_error_inject_error", 00:06:17.649 "bdev_error_delete", 00:06:17.649 "bdev_error_create", 00:06:17.649 "bdev_split_delete", 00:06:17.649 "bdev_split_create", 00:06:17.649 "bdev_delay_delete", 00:06:17.649 "bdev_delay_create", 00:06:17.649 "bdev_delay_update_latency", 00:06:17.649 "bdev_zone_block_delete", 00:06:17.649 "bdev_zone_block_create", 00:06:17.649 "blobfs_create", 00:06:17.649 "blobfs_detect", 00:06:17.649 "blobfs_set_cache_size", 00:06:17.649 "bdev_aio_delete", 00:06:17.649 "bdev_aio_rescan", 00:06:17.649 "bdev_aio_create", 00:06:17.649 "bdev_ftl_set_property", 00:06:17.649 "bdev_ftl_get_properties", 00:06:17.649 "bdev_ftl_get_stats", 00:06:17.649 "bdev_ftl_unmap", 00:06:17.649 "bdev_ftl_unload", 00:06:17.649 "bdev_ftl_delete", 00:06:17.649 "bdev_ftl_load", 00:06:17.649 "bdev_ftl_create", 00:06:17.649 "bdev_virtio_attach_controller", 00:06:17.649 "bdev_virtio_scsi_get_devices", 00:06:17.649 "bdev_virtio_detach_controller", 00:06:17.649 "bdev_virtio_blk_set_hotplug", 00:06:17.649 "bdev_iscsi_delete", 00:06:17.649 "bdev_iscsi_create", 00:06:17.649 "bdev_iscsi_set_options", 00:06:17.649 "bdev_uring_delete", 00:06:17.649 "bdev_uring_rescan", 00:06:17.649 "bdev_uring_create", 00:06:17.649 "accel_error_inject_error", 00:06:17.649 "ioat_scan_accel_module", 00:06:17.649 "dsa_scan_accel_module", 00:06:17.649 "iaa_scan_accel_module", 00:06:17.649 "keyring_file_remove_key", 00:06:17.649 "keyring_file_add_key", 00:06:17.649 "keyring_linux_set_options", 00:06:17.649 "iscsi_get_histogram", 00:06:17.649 "iscsi_enable_histogram", 00:06:17.649 "iscsi_set_options", 00:06:17.649 "iscsi_get_auth_groups", 00:06:17.649 "iscsi_auth_group_remove_secret", 00:06:17.649 "iscsi_auth_group_add_secret", 00:06:17.649 "iscsi_delete_auth_group", 00:06:17.649 "iscsi_create_auth_group", 00:06:17.649 "iscsi_set_discovery_auth", 00:06:17.649 "iscsi_get_options", 00:06:17.649 "iscsi_target_node_request_logout", 00:06:17.649 "iscsi_target_node_set_redirect", 00:06:17.649 "iscsi_target_node_set_auth", 00:06:17.649 "iscsi_target_node_add_lun", 00:06:17.649 "iscsi_get_stats", 00:06:17.649 "iscsi_get_connections", 00:06:17.649 "iscsi_portal_group_set_auth", 00:06:17.649 "iscsi_start_portal_group", 00:06:17.649 "iscsi_delete_portal_group", 00:06:17.649 "iscsi_create_portal_group", 00:06:17.649 "iscsi_get_portal_groups", 00:06:17.649 "iscsi_delete_target_node", 00:06:17.649 "iscsi_target_node_remove_pg_ig_maps", 00:06:17.649 "iscsi_target_node_add_pg_ig_maps", 00:06:17.649 "iscsi_create_target_node", 00:06:17.649 "iscsi_get_target_nodes", 00:06:17.649 "iscsi_delete_initiator_group", 00:06:17.649 "iscsi_initiator_group_remove_initiators", 00:06:17.649 "iscsi_initiator_group_add_initiators", 00:06:17.649 "iscsi_create_initiator_group", 00:06:17.649 "iscsi_get_initiator_groups", 00:06:17.649 "nvmf_set_crdt", 00:06:17.649 "nvmf_set_config", 00:06:17.649 "nvmf_set_max_subsystems", 00:06:17.649 "nvmf_stop_mdns_prr", 00:06:17.649 "nvmf_publish_mdns_prr", 00:06:17.649 "nvmf_subsystem_get_listeners", 00:06:17.649 "nvmf_subsystem_get_qpairs", 00:06:17.649 "nvmf_subsystem_get_controllers", 00:06:17.649 "nvmf_get_stats", 00:06:17.649 "nvmf_get_transports", 00:06:17.649 "nvmf_create_transport", 00:06:17.649 "nvmf_get_targets", 00:06:17.649 "nvmf_delete_target", 00:06:17.649 "nvmf_create_target", 00:06:17.649 "nvmf_subsystem_allow_any_host", 00:06:17.649 "nvmf_subsystem_remove_host", 00:06:17.649 "nvmf_subsystem_add_host", 00:06:17.649 "nvmf_ns_remove_host", 00:06:17.649 
"nvmf_ns_add_host", 00:06:17.649 "nvmf_subsystem_remove_ns", 00:06:17.649 "nvmf_subsystem_add_ns", 00:06:17.649 "nvmf_subsystem_listener_set_ana_state", 00:06:17.649 "nvmf_discovery_get_referrals", 00:06:17.649 "nvmf_discovery_remove_referral", 00:06:17.649 "nvmf_discovery_add_referral", 00:06:17.649 "nvmf_subsystem_remove_listener", 00:06:17.649 "nvmf_subsystem_add_listener", 00:06:17.649 "nvmf_delete_subsystem", 00:06:17.649 "nvmf_create_subsystem", 00:06:17.649 "nvmf_get_subsystems", 00:06:17.649 "env_dpdk_get_mem_stats", 00:06:17.649 "nbd_get_disks", 00:06:17.649 "nbd_stop_disk", 00:06:17.649 "nbd_start_disk", 00:06:17.649 "ublk_recover_disk", 00:06:17.649 "ublk_get_disks", 00:06:17.649 "ublk_stop_disk", 00:06:17.649 "ublk_start_disk", 00:06:17.649 "ublk_destroy_target", 00:06:17.649 "ublk_create_target", 00:06:17.649 "virtio_blk_create_transport", 00:06:17.649 "virtio_blk_get_transports", 00:06:17.649 "vhost_controller_set_coalescing", 00:06:17.649 "vhost_get_controllers", 00:06:17.649 "vhost_delete_controller", 00:06:17.649 "vhost_create_blk_controller", 00:06:17.649 "vhost_scsi_controller_remove_target", 00:06:17.649 "vhost_scsi_controller_add_target", 00:06:17.649 "vhost_start_scsi_controller", 00:06:17.649 "vhost_create_scsi_controller", 00:06:17.649 "thread_set_cpumask", 00:06:17.649 "framework_get_governor", 00:06:17.649 "framework_get_scheduler", 00:06:17.649 "framework_set_scheduler", 00:06:17.649 "framework_get_reactors", 00:06:17.649 "thread_get_io_channels", 00:06:17.649 "thread_get_pollers", 00:06:17.649 "thread_get_stats", 00:06:17.649 "framework_monitor_context_switch", 00:06:17.649 "spdk_kill_instance", 00:06:17.649 "log_enable_timestamps", 00:06:17.649 "log_get_flags", 00:06:17.649 "log_clear_flag", 00:06:17.649 "log_set_flag", 00:06:17.649 "log_get_level", 00:06:17.649 "log_set_level", 00:06:17.649 "log_get_print_level", 00:06:17.649 "log_set_print_level", 00:06:17.649 "framework_enable_cpumask_locks", 00:06:17.649 "framework_disable_cpumask_locks", 00:06:17.649 "framework_wait_init", 00:06:17.649 "framework_start_init", 00:06:17.649 "scsi_get_devices", 00:06:17.649 "bdev_get_histogram", 00:06:17.649 "bdev_enable_histogram", 00:06:17.649 "bdev_set_qos_limit", 00:06:17.649 "bdev_set_qd_sampling_period", 00:06:17.649 "bdev_get_bdevs", 00:06:17.649 "bdev_reset_iostat", 00:06:17.649 "bdev_get_iostat", 00:06:17.649 "bdev_examine", 00:06:17.649 "bdev_wait_for_examine", 00:06:17.649 "bdev_set_options", 00:06:17.649 "notify_get_notifications", 00:06:17.649 "notify_get_types", 00:06:17.649 "accel_get_stats", 00:06:17.649 "accel_set_options", 00:06:17.649 "accel_set_driver", 00:06:17.649 "accel_crypto_key_destroy", 00:06:17.649 "accel_crypto_keys_get", 00:06:17.649 "accel_crypto_key_create", 00:06:17.649 "accel_assign_opc", 00:06:17.649 "accel_get_module_info", 00:06:17.649 "accel_get_opc_assignments", 00:06:17.649 "vmd_rescan", 00:06:17.649 "vmd_remove_device", 00:06:17.649 "vmd_enable", 00:06:17.649 "sock_get_default_impl", 00:06:17.649 "sock_set_default_impl", 00:06:17.649 "sock_impl_set_options", 00:06:17.649 "sock_impl_get_options", 00:06:17.649 "iobuf_get_stats", 00:06:17.649 "iobuf_set_options", 00:06:17.649 "framework_get_pci_devices", 00:06:17.649 "framework_get_config", 00:06:17.649 "framework_get_subsystems", 00:06:17.649 "trace_get_info", 00:06:17.649 "trace_get_tpoint_group_mask", 00:06:17.649 "trace_disable_tpoint_group", 00:06:17.649 "trace_enable_tpoint_group", 00:06:17.649 "trace_clear_tpoint_mask", 00:06:17.649 "trace_set_tpoint_mask", 00:06:17.649 
"keyring_get_keys", 00:06:17.649 "spdk_get_version", 00:06:17.649 "rpc_get_methods" 00:06:17.649 ] 00:06:17.649 04:02:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:17.649 04:02:10 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.649 04:02:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.918 04:02:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:17.918 04:02:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 73427 00:06:17.918 04:02:11 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 73427 ']' 00:06:17.919 04:02:11 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 73427 00:06:17.919 04:02:11 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:17.919 04:02:11 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.919 04:02:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73427 00:06:17.919 killing process with pid 73427 00:06:17.919 04:02:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.919 04:02:11 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.919 04:02:11 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73427' 00:06:17.919 04:02:11 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 73427 00:06:17.919 04:02:11 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 73427 00:06:18.179 ************************************ 00:06:18.179 END TEST spdkcli_tcp 00:06:18.179 ************************************ 00:06:18.179 00:06:18.179 real 0m1.757s 00:06:18.179 user 0m3.269s 00:06:18.179 sys 0m0.471s 00:06:18.179 04:02:11 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.179 04:02:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.179 04:02:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:18.179 04:02:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.179 04:02:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.179 04:02:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.179 04:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:18.179 ************************************ 00:06:18.179 START TEST dpdk_mem_utility 00:06:18.179 ************************************ 00:06:18.179 04:02:11 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.179 * Looking for test storage... 00:06:18.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:18.179 04:02:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:18.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:18.179 04:02:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=73518 00:06:18.179 04:02:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 73518 00:06:18.179 04:02:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.179 04:02:11 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 73518 ']' 00:06:18.179 04:02:11 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.179 04:02:11 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.179 04:02:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.179 04:02:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.179 04:02:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.437 [2024-07-23 04:02:11.576194] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:18.437 [2024-07-23 04:02:11.576297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73518 ] 00:06:18.437 [2024-07-23 04:02:11.697818] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:18.437 [2024-07-23 04:02:11.715746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.695 [2024-07-23 04:02:11.786892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.695 [2024-07-23 04:02:11.839999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.262 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.262 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:19.262 04:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:19.262 04:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:19.262 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.262 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.523 { 00:06:19.523 "filename": "/tmp/spdk_mem_dump.txt" 00:06:19.523 } 00:06:19.523 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.523 04:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:19.523 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:19.523 1 heaps totaling size 814.000000 MiB 00:06:19.523 size: 814.000000 MiB heap id: 0 00:06:19.523 end heaps---------- 00:06:19.523 8 mempools totaling size 598.116089 MiB 00:06:19.523 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:19.523 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:19.523 size: 84.521057 MiB name: bdev_io_73518 00:06:19.523 size: 51.011292 MiB name: evtpool_73518 00:06:19.523 size: 50.003479 MiB name: msgpool_73518 00:06:19.523 size: 21.763794 MiB name: PDU_Pool 00:06:19.523 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:19.523 size: 0.026123 MiB 
name: Session_Pool 00:06:19.523 end mempools------- 00:06:19.523 6 memzones totaling size 4.142822 MiB 00:06:19.523 size: 1.000366 MiB name: RG_ring_0_73518 00:06:19.523 size: 1.000366 MiB name: RG_ring_1_73518 00:06:19.523 size: 1.000366 MiB name: RG_ring_4_73518 00:06:19.523 size: 1.000366 MiB name: RG_ring_5_73518 00:06:19.523 size: 0.125366 MiB name: RG_ring_2_73518 00:06:19.523 size: 0.015991 MiB name: RG_ring_3_73518 00:06:19.523 end memzones------- 00:06:19.523 04:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:19.523 heap id: 0 total size: 814.000000 MiB number of busy elements: 299 number of free elements: 15 00:06:19.523 list of free elements. size: 12.472107 MiB 00:06:19.523 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:19.523 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:19.523 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:19.523 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:19.523 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:19.523 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:19.523 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:19.523 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:19.523 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:19.523 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:06:19.523 element at address: 0x20000b200000 with size: 0.489624 MiB 00:06:19.523 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:19.523 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:19.523 element at address: 0x200027e00000 with size: 0.396118 MiB 00:06:19.523 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:19.523 list of standard malloc elements. 
size: 199.265320 MiB 00:06:19.523 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:19.523 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:19.523 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:19.523 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:19.523 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:19.523 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:19.523 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:19.523 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:19.523 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:19.523 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:06:19.523 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:19.523 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:19.523 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:19.523 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa91fc0 
with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94480 with size: 0.000183 MiB 
00:06:19.524 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:19.524 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e65680 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e65740 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6c340 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:19.524 element at 
address: 0x200027e6d800 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:19.524 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6fcc0 
with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:19.525 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:19.525 list of memzone associated elements. size: 602.262573 MiB 00:06:19.525 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:19.525 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:19.525 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:19.525 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:19.525 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:19.525 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_73518_0 00:06:19.525 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:19.525 associated memzone info: size: 48.002930 MiB name: MP_evtpool_73518_0 00:06:19.525 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:19.525 associated memzone info: size: 48.002930 MiB name: MP_msgpool_73518_0 00:06:19.525 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:19.525 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:19.525 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:19.525 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:19.525 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:19.525 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_73518 00:06:19.525 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:19.525 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_73518 00:06:19.525 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:19.525 associated memzone info: size: 1.007996 MiB name: MP_evtpool_73518 00:06:19.525 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:19.525 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:19.525 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:19.525 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:19.525 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:19.525 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:19.525 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:19.525 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:19.525 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:19.525 associated memzone info: size: 1.000366 MiB name: RG_ring_0_73518 00:06:19.525 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:19.525 associated memzone info: size: 1.000366 MiB name: RG_ring_1_73518 00:06:19.525 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:19.525 associated memzone info: size: 1.000366 MiB name: RG_ring_4_73518 00:06:19.525 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:19.525 associated memzone info: size: 1.000366 MiB name: RG_ring_5_73518 00:06:19.525 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:19.525 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_73518 00:06:19.525 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:19.525 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:19.525 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:19.525 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:06:19.525 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:19.525 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:19.525 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:19.525 associated memzone info: size: 0.125366 MiB name: RG_ring_2_73518 00:06:19.525 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:19.525 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:19.525 element at address: 0x200027e65800 with size: 0.023743 MiB 00:06:19.525 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:19.525 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:19.525 associated memzone info: size: 0.015991 MiB name: RG_ring_3_73518 00:06:19.525 element at address: 0x200027e6b940 with size: 0.002441 MiB 00:06:19.525 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:19.525 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:19.525 associated memzone info: size: 0.000183 MiB name: MP_msgpool_73518 00:06:19.525 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:19.525 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_73518 00:06:19.525 element at address: 0x200027e6c400 with size: 0.000305 MiB 00:06:19.525 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:19.525 04:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:19.525 04:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 73518 00:06:19.525 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 73518 ']' 00:06:19.525 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 73518 00:06:19.525 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:19.525 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.525 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73518 00:06:19.525 killing process with pid 73518 00:06:19.525 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.525 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.525 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73518' 00:06:19.525 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 73518 00:06:19.525 04:02:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 73518 00:06:20.091 00:06:20.091 real 0m1.714s 00:06:20.091 user 0m1.935s 00:06:20.091 sys 0m0.404s 00:06:20.091 ************************************ 00:06:20.091 END TEST dpdk_mem_utility 00:06:20.091 ************************************ 00:06:20.091 04:02:13 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.091 04:02:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.091 04:02:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.091 04:02:13 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:20.091 04:02:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.091 04:02:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.091 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:20.091 ************************************ 00:06:20.091 START TEST event 00:06:20.091 
************************************ 00:06:20.091 04:02:13 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:20.091 * Looking for test storage... 00:06:20.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:20.091 04:02:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:20.092 04:02:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:20.092 04:02:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.092 04:02:13 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:20.092 04:02:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.092 04:02:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.092 ************************************ 00:06:20.092 START TEST event_perf 00:06:20.092 ************************************ 00:06:20.092 04:02:13 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.092 Running I/O for 1 seconds...[2024-07-23 04:02:13.311171] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:20.092 [2024-07-23 04:02:13.311253] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73595 ] 00:06:20.092 [2024-07-23 04:02:13.431435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.350 [2024-07-23 04:02:13.448389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.350 [2024-07-23 04:02:13.519097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.350 [2024-07-23 04:02:13.519309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.350 [2024-07-23 04:02:13.519415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.350 [2024-07-23 04:02:13.519617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.285 Running I/O for 1 seconds... 00:06:21.285 lcore 0: 120815 00:06:21.285 lcore 1: 120814 00:06:21.285 lcore 2: 120815 00:06:21.285 lcore 3: 120818 00:06:21.285 done. 
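The lcore lines above are the raw output of the event_perf binary launched earlier in this test (/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1). A minimal sketch of rerunning that benchmark and totalling the per-core counts is given below; the awk wrapper is illustrative only and is not part of the SPDK test scripts.

#!/usr/bin/env bash
# Sketch only: rerun the 1-second, 4-core event_perf benchmark seen above and
# sum the per-lcore event counts it prints as "lcore N: COUNT".
EVENT_PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
"$EVENT_PERF" -m 0xF -t 1 | awk '/^lcore [0-9]+:/ { total += $3 } END { print "total events:", total }'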
00:06:21.285 00:06:21.285 real 0m1.308s 00:06:21.285 user 0m4.119s 00:06:21.285 sys 0m0.068s 00:06:21.285 04:02:14 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.285 ************************************ 00:06:21.285 END TEST event_perf 00:06:21.285 ************************************ 00:06:21.285 04:02:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.544 04:02:14 event -- common/autotest_common.sh@1142 -- # return 0 00:06:21.544 04:02:14 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:21.544 04:02:14 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:21.544 04:02:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.544 04:02:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.544 ************************************ 00:06:21.544 START TEST event_reactor 00:06:21.544 ************************************ 00:06:21.544 04:02:14 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:21.544 [2024-07-23 04:02:14.674573] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:21.544 [2024-07-23 04:02:14.674669] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73628 ] 00:06:21.544 [2024-07-23 04:02:14.790407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.544 [2024-07-23 04:02:14.807366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.544 [2024-07-23 04:02:14.872766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.923 test_start 00:06:22.923 oneshot 00:06:22.923 tick 100 00:06:22.923 tick 100 00:06:22.923 tick 250 00:06:22.923 tick 100 00:06:22.923 tick 100 00:06:22.923 tick 100 00:06:22.923 tick 250 00:06:22.923 tick 500 00:06:22.923 tick 100 00:06:22.923 tick 100 00:06:22.923 tick 250 00:06:22.923 tick 100 00:06:22.923 tick 100 00:06:22.923 test_end 00:06:22.923 00:06:22.923 real 0m1.273s 00:06:22.923 user 0m1.120s 00:06:22.923 sys 0m0.048s 00:06:22.923 04:02:15 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.923 ************************************ 00:06:22.923 END TEST event_reactor 00:06:22.923 ************************************ 00:06:22.923 04:02:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 04:02:15 event -- common/autotest_common.sh@1142 -- # return 0 00:06:22.923 04:02:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:22.923 04:02:15 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:22.923 04:02:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.923 04:02:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 ************************************ 00:06:22.923 START TEST event_reactor_perf 00:06:22.923 ************************************ 00:06:22.923 04:02:15 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:22.923 [2024-07-23 04:02:16.002916] Starting SPDK 
v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:22.923 [2024-07-23 04:02:16.003006] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73658 ] 00:06:22.923 [2024-07-23 04:02:16.122028] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:22.923 [2024-07-23 04:02:16.139568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.923 [2024-07-23 04:02:16.201881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.307 test_start 00:06:24.307 test_end 00:06:24.307 Performance: 446558 events per second 00:06:24.307 00:06:24.307 real 0m1.311s 00:06:24.307 user 0m1.145s 00:06:24.307 sys 0m0.060s 00:06:24.307 04:02:17 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.307 ************************************ 00:06:24.307 END TEST event_reactor_perf 00:06:24.307 ************************************ 00:06:24.307 04:02:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 04:02:17 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.307 04:02:17 event -- event/event.sh@49 -- # uname -s 00:06:24.307 04:02:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:24.308 04:02:17 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:24.308 04:02:17 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.308 04:02:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.308 04:02:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.308 ************************************ 00:06:24.308 START TEST event_scheduler 00:06:24.308 ************************************ 00:06:24.308 04:02:17 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:24.308 * Looking for test storage... 00:06:24.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:24.308 04:02:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:24.308 04:02:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=73724 00:06:24.308 04:02:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:24.308 04:02:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.308 04:02:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 73724 00:06:24.308 04:02:17 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 73724 ']' 00:06:24.308 04:02:17 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.308 04:02:17 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.308 04:02:17 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:24.308 04:02:17 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.308 04:02:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.308 [2024-07-23 04:02:17.477679] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:24.308 [2024-07-23 04:02:17.477766] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73724 ] 00:06:24.308 [2024-07-23 04:02:17.598635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:24.308 [2024-07-23 04:02:17.610447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.566 [2024-07-23 04:02:17.715968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.567 [2024-07-23 04:02:17.716102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.567 [2024-07-23 04:02:17.716219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.567 [2024-07-23 04:02:17.716225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.147 04:02:18 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.147 04:02:18 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:25.147 04:02:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:25.147 04:02:18 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.147 04:02:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.147 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.147 POWER: Cannot set governor of lcore 0 to userspace 00:06:25.147 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.147 POWER: Cannot set governor of lcore 0 to performance 00:06:25.147 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.147 POWER: Cannot set governor of lcore 0 to userspace 00:06:25.147 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.147 POWER: Cannot set governor of lcore 0 to userspace 00:06:25.147 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:25.147 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:25.147 POWER: Unable to set Power Management Environment for lcore 0 00:06:25.147 [2024-07-23 04:02:18.450774] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:25.147 [2024-07-23 04:02:18.450791] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:25.147 [2024-07-23 04:02:18.450852] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:25.147 [2024-07-23 04:02:18.450871] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:25.147 [2024-07-23 04:02:18.450882] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:25.147 [2024-07-23 04:02:18.451404] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:25.147 04:02:18 event.event_scheduler -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.147 04:02:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:25.147 04:02:18 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.147 04:02:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 [2024-07-23 04:02:18.541040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.406 [2024-07-23 04:02:18.573958] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:25.406 04:02:18 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:25.406 04:02:18 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.406 04:02:18 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 ************************************ 00:06:25.406 START TEST scheduler_create_thread 00:06:25.406 ************************************ 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 2 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 3 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 4 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 5 00:06:25.406 04:02:18 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 6 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 7 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 8 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 9 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 10 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:25.406 04:02:18 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:25.406 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.407 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.407 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.407 04:02:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:25.407 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.407 04:02:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.314 04:02:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.314 04:02:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:27.314 04:02:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:27.314 04:02:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.314 04:02:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.881 04:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.881 00:06:27.881 real 0m2.613s 00:06:27.881 user 0m0.016s 00:06:27.881 sys 0m0.004s 00:06:27.881 04:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.881 ************************************ 00:06:27.881 END TEST scheduler_create_thread 00:06:27.881 ************************************ 00:06:27.881 04:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:28.139 04:02:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:28.139 04:02:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 73724 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 73724 ']' 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 73724 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73724 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:28.139 killing process with pid 73724 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73724' 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 73724 00:06:28.139 04:02:21 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 73724 00:06:28.397 [2024-07-23 04:02:21.678418] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:28.657 00:06:28.657 real 0m4.551s 00:06:28.657 user 0m8.526s 00:06:28.657 sys 0m0.395s 00:06:28.657 04:02:21 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.657 ************************************ 00:06:28.657 END TEST event_scheduler 00:06:28.657 ************************************ 00:06:28.657 04:02:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.657 04:02:21 event -- common/autotest_common.sh@1142 -- # return 0 00:06:28.657 04:02:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:28.657 04:02:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:28.657 04:02:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.657 04:02:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.657 04:02:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.657 ************************************ 00:06:28.657 START TEST app_repeat 00:06:28.657 ************************************ 00:06:28.657 04:02:21 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=73819 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.657 Process app_repeat pid: 73819 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 73819' 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.657 spdk_app_start Round 0 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:28.657 04:02:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73819 /var/tmp/spdk-nbd.sock 00:06:28.657 04:02:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 73819 ']' 00:06:28.657 04:02:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.657 04:02:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.657 04:02:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.657 04:02:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.657 04:02:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.657 [2024-07-23 04:02:21.979661] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:06:28.657 [2024-07-23 04:02:21.979765] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73819 ] 00:06:28.944 [2024-07-23 04:02:22.095965] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.944 [2024-07-23 04:02:22.111980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.944 [2024-07-23 04:02:22.168595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.944 [2024-07-23 04:02:22.168607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.944 [2024-07-23 04:02:22.221018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.202 04:02:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.202 04:02:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:29.202 04:02:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.202 Malloc0 00:06:29.202 04:02:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.469 Malloc1 00:06:29.469 04:02:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.469 04:02:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.730 /dev/nbd0 00:06:29.730 04:02:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.730 04:02:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:29.730 
04:02:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.730 1+0 records in 00:06:29.730 1+0 records out 00:06:29.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296748 s, 13.8 MB/s 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:29.730 04:02:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:29.730 04:02:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.730 04:02:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.730 04:02:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.297 /dev/nbd1 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.297 1+0 records in 00:06:30.297 1+0 records out 00:06:30.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331758 s, 12.3 MB/s 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.297 04:02:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@14 -- 
# (( i++ )) 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.297 { 00:06:30.297 "nbd_device": "/dev/nbd0", 00:06:30.297 "bdev_name": "Malloc0" 00:06:30.297 }, 00:06:30.297 { 00:06:30.297 "nbd_device": "/dev/nbd1", 00:06:30.297 "bdev_name": "Malloc1" 00:06:30.297 } 00:06:30.297 ]' 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.297 { 00:06:30.297 "nbd_device": "/dev/nbd0", 00:06:30.297 "bdev_name": "Malloc0" 00:06:30.297 }, 00:06:30.297 { 00:06:30.297 "nbd_device": "/dev/nbd1", 00:06:30.297 "bdev_name": "Malloc1" 00:06:30.297 } 00:06:30.297 ]' 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.297 /dev/nbd1' 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.297 /dev/nbd1' 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.297 256+0 records in 00:06:30.297 256+0 records out 00:06:30.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00752269 s, 139 MB/s 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.297 04:02:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.556 256+0 records in 00:06:30.556 256+0 records out 00:06:30.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270442 s, 38.8 MB/s 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.556 256+0 records in 00:06:30.556 256+0 records out 00:06:30.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.04599 s, 22.8 MB/s 00:06:30.556 04:02:23 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.556 04:02:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.826 04:02:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.826 04:02:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.826 04:02:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.826 04:02:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.826 04:02:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.826 04:02:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.826 04:02:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.826 04:02:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.826 04:02:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.826 04:02:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.826 04:02:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.826 04:02:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.826 04:02:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.826 04:02:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.826 04:02:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.826 04:02:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.826 
04:02:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.826 04:02:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.826 04:02:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.826 04:02:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.826 04:02:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.089 04:02:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.089 04:02:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.348 04:02:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.606 [2024-07-23 04:02:24.848322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.606 [2024-07-23 04:02:24.945889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.606 [2024-07-23 04:02:24.945920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.865 [2024-07-23 04:02:25.000517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.865 [2024-07-23 04:02:25.000643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.865 [2024-07-23 04:02:25.000660] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.399 04:02:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.399 spdk_app_start Round 1 00:06:34.399 04:02:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:34.399 04:02:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73819 /var/tmp/spdk-nbd.sock 00:06:34.399 04:02:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 73819 ']' 00:06:34.399 04:02:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.399 04:02:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.399 04:02:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:34.399 04:02:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.399 04:02:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.658 04:02:27 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.658 04:02:27 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:34.658 04:02:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.916 Malloc0 00:06:34.916 04:02:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.175 Malloc1 00:06:35.175 04:02:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.175 04:02:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.434 /dev/nbd0 00:06:35.434 04:02:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.434 04:02:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.434 1+0 records in 00:06:35.434 1+0 records out 
00:06:35.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361315 s, 11.3 MB/s 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.434 04:02:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:35.434 04:02:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.434 04:02:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.434 04:02:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.693 /dev/nbd1 00:06:35.693 04:02:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.693 04:02:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.693 1+0 records in 00:06:35.693 1+0 records out 00:06:35.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234498 s, 17.5 MB/s 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.693 04:02:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:35.693 04:02:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.693 04:02:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.693 04:02:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.693 04:02:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.693 04:02:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.952 04:02:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.952 { 00:06:35.952 "nbd_device": "/dev/nbd0", 00:06:35.952 "bdev_name": "Malloc0" 00:06:35.952 }, 00:06:35.952 { 00:06:35.952 "nbd_device": "/dev/nbd1", 00:06:35.952 "bdev_name": "Malloc1" 00:06:35.952 } 
00:06:35.952 ]' 00:06:35.952 04:02:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.952 04:02:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.952 { 00:06:35.952 "nbd_device": "/dev/nbd0", 00:06:35.952 "bdev_name": "Malloc0" 00:06:35.952 }, 00:06:35.952 { 00:06:35.952 "nbd_device": "/dev/nbd1", 00:06:35.952 "bdev_name": "Malloc1" 00:06:35.952 } 00:06:35.952 ]' 00:06:35.952 04:02:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.952 /dev/nbd1' 00:06:35.952 04:02:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.952 /dev/nbd1' 00:06:35.952 04:02:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.952 04:02:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.953 256+0 records in 00:06:35.953 256+0 records out 00:06:35.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105653 s, 99.2 MB/s 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.953 256+0 records in 00:06:35.953 256+0 records out 00:06:35.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187758 s, 55.8 MB/s 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.953 256+0 records in 00:06:35.953 256+0 records out 00:06:35.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284093 s, 36.9 MB/s 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.953 04:02:29 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.953 04:02:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.211 04:02:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.211 04:02:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.211 04:02:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.211 04:02:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.211 04:02:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.211 04:02:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.211 04:02:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.211 04:02:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.211 04:02:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.211 04:02:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.470 04:02:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.776 04:02:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.776 04:02:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.035 04:02:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.294 [2024-07-23 04:02:30.407639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.294 [2024-07-23 04:02:30.480619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.294 [2024-07-23 04:02:30.480633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.294 [2024-07-23 04:02:30.534225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.294 [2024-07-23 04:02:30.534355] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.294 [2024-07-23 04:02:30.534372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.578 spdk_app_start Round 2 00:06:40.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.578 04:02:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.578 04:02:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:40.578 04:02:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73819 /var/tmp/spdk-nbd.sock 00:06:40.578 04:02:33 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 73819 ']' 00:06:40.578 04:02:33 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.578 04:02:33 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.578 04:02:33 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
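After each nbd_stop_disk call, the waitfornbd_exit steps above poll /proc/partitions until the device node disappears, capped at 20 attempts. A sketch of that loop follows; the grep test and the iteration cap come from the trace, while the 0.1 s sleep between attempts is an assumption about the part the log does not show.

#!/usr/bin/env bash
# Wait for an NBD device to vanish from /proc/partitions after it is stopped.
waitfornbd_exit() {
    local nbd_name=$1

    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # still registered; give the kernel time to tear it down (assumed interval)
        else
            break       # device is gone, nothing left to wait for
        fi
    done

    return 0
}

waitfornbd_exit nbd0
waitfornbd_exit nbd1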
00:06:40.578 04:02:33 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.578 04:02:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.578 04:02:33 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.578 04:02:33 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:40.578 04:02:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.578 Malloc0 00:06:40.578 04:02:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.837 Malloc1 00:06:40.837 04:02:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.837 04:02:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.837 /dev/nbd0 00:06:41.096 04:02:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.096 04:02:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.096 04:02:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.097 1+0 records in 00:06:41.097 1+0 records out 
00:06:41.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000928309 s, 4.4 MB/s 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.097 04:02:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.097 04:02:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.097 04:02:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.097 04:02:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.097 /dev/nbd1 00:06:41.355 04:02:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.355 04:02:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.355 1+0 records in 00:06:41.355 1+0 records out 00:06:41.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228748 s, 17.9 MB/s 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.355 04:02:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.355 04:02:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.355 04:02:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.355 04:02:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.355 04:02:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.355 04:02:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.614 { 00:06:41.614 "nbd_device": "/dev/nbd0", 00:06:41.614 "bdev_name": "Malloc0" 00:06:41.614 }, 00:06:41.614 { 00:06:41.614 "nbd_device": "/dev/nbd1", 00:06:41.614 "bdev_name": "Malloc1" 00:06:41.614 } 
00:06:41.614 ]' 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.614 { 00:06:41.614 "nbd_device": "/dev/nbd0", 00:06:41.614 "bdev_name": "Malloc0" 00:06:41.614 }, 00:06:41.614 { 00:06:41.614 "nbd_device": "/dev/nbd1", 00:06:41.614 "bdev_name": "Malloc1" 00:06:41.614 } 00:06:41.614 ]' 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.614 /dev/nbd1' 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.614 /dev/nbd1' 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.614 256+0 records in 00:06:41.614 256+0 records out 00:06:41.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00713978 s, 147 MB/s 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.614 256+0 records in 00:06:41.614 256+0 records out 00:06:41.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239357 s, 43.8 MB/s 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.614 256+0 records in 00:06:41.614 256+0 records out 00:06:41.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025911 s, 40.5 MB/s 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.614 04:02:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.872 04:02:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.872 04:02:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.872 04:02:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.872 04:02:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.872 04:02:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.872 04:02:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.872 04:02:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.872 04:02:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.872 04:02:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.873 04:02:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.135 04:02:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.394 04:02:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.394 04:02:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.394 04:02:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:42.653 04:02:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.653 04:02:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.653 04:02:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.653 04:02:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.653 04:02:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.653 04:02:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.653 04:02:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.653 04:02:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.653 04:02:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.653 04:02:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.912 04:02:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.912 [2024-07-23 04:02:36.217135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.170 [2024-07-23 04:02:36.267346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.170 [2024-07-23 04:02:36.267361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.170 [2024-07-23 04:02:36.320310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.170 [2024-07-23 04:02:36.320459] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.170 [2024-07-23 04:02:36.320476] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.712 04:02:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 73819 /var/tmp/spdk-nbd.sock 00:06:45.712 04:02:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 73819 ']' 00:06:45.712 04:02:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.712 04:02:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.712 04:02:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
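Each round's data check above follows the same dd/cmp cycle: fill a temp file with 1 MiB of random data, write it to every NBD device with O_DIRECT, then compare each device byte-for-byte against the file. A condensed sketch of that pass, using the temp-file path from the log; the set -e guard and the loop wrapper are illustrative.

#!/usr/bin/env bash
set -euo pipefail

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

# 256 x 4 KiB = 1 MiB of random reference data.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

# Write the pattern through each NBD device, bypassing the page cache.
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# Read back and verify; cmp exits non-zero on the first differing byte.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm "$tmp_file"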
00:06:45.712 04:02:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.712 04:02:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:45.974 04:02:39 event.app_repeat -- event/event.sh@39 -- # killprocess 73819 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 73819 ']' 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 73819 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73819 00:06:45.974 killing process with pid 73819 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73819' 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@967 -- # kill 73819 00:06:45.974 04:02:39 event.app_repeat -- common/autotest_common.sh@972 -- # wait 73819 00:06:46.233 spdk_app_start is called in Round 0. 00:06:46.233 Shutdown signal received, stop current app iteration 00:06:46.233 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 reinitialization... 00:06:46.233 spdk_app_start is called in Round 1. 00:06:46.233 Shutdown signal received, stop current app iteration 00:06:46.233 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 reinitialization... 00:06:46.233 spdk_app_start is called in Round 2. 00:06:46.233 Shutdown signal received, stop current app iteration 00:06:46.233 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 reinitialization... 00:06:46.233 spdk_app_start is called in Round 3. 
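The killprocess steps above run the same safety checks each time a target is torn down: confirm the pid is non-empty and still alive, refuse to signal anything named sudo, then kill and reap it. A sketch of that helper; the error handling around each check is simplified relative to the trace.

#!/usr/bin/env bash
# Stop an SPDK target started earlier in the same shell and wait for it to exit.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1

    kill -0 "$pid" || return 1            # fails if the process is already gone

    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1   # never SIGTERM a sudo wrapper
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"     # reaping works because the target is a child of this shell
}

killprocess 73819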
00:06:46.233 Shutdown signal received, stop current app iteration 00:06:46.233 04:02:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:46.233 04:02:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:46.233 00:06:46.233 real 0m17.524s 00:06:46.233 user 0m39.138s 00:06:46.233 sys 0m2.565s 00:06:46.233 04:02:39 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.233 04:02:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.233 ************************************ 00:06:46.233 END TEST app_repeat 00:06:46.233 ************************************ 00:06:46.233 04:02:39 event -- common/autotest_common.sh@1142 -- # return 0 00:06:46.233 04:02:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:46.233 04:02:39 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:46.233 04:02:39 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.233 04:02:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.233 04:02:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.233 ************************************ 00:06:46.233 START TEST cpu_locks 00:06:46.233 ************************************ 00:06:46.233 04:02:39 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:46.492 * Looking for test storage... 00:06:46.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:46.492 04:02:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:46.492 04:02:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:46.492 04:02:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:46.492 04:02:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:46.492 04:02:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.492 04:02:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.492 04:02:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.492 ************************************ 00:06:46.492 START TEST default_locks 00:06:46.492 ************************************ 00:06:46.492 04:02:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:46.492 04:02:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=74236 00:06:46.492 04:02:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 74236 00:06:46.492 04:02:39 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 74236 ']' 00:06:46.492 04:02:39 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.492 04:02:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.492 04:02:39 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.492 04:02:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
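Every test above and below goes through the same run_test wrapper, which prints the START/END banners and the real/user/sys timing lines that appear in the log. The sketch below reconstructs its likely shape; only the banners, the argument-count check and the time output are visible in the trace, so the exact banner width and the usage message are assumptions.

#!/usr/bin/env bash
# Banner-and-timing wrapper around a single test command.
run_test() {
    (($# > 1)) || { echo "usage: run_test <name> <command> [args...]" >&2; return 1; }
    local test_name=$1
    shift

    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"

    time "$@"      # produces the real/user/sys summary printed after each test
    local rc=$?

    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test "cpu_locks" /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh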
00:06:46.492 04:02:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.492 04:02:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.492 [2024-07-23 04:02:39.694277] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:46.492 [2024-07-23 04:02:39.694379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74236 ] 00:06:46.492 [2024-07-23 04:02:39.816097] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.750 [2024-07-23 04:02:39.834850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.750 [2024-07-23 04:02:39.893897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.750 [2024-07-23 04:02:39.946663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.317 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.317 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:47.317 04:02:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 74236 00:06:47.317 04:02:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 74236 00:06:47.317 04:02:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 74236 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 74236 ']' 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 74236 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74236 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.883 killing process with pid 74236 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74236' 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 74236 00:06:47.883 04:02:40 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 74236 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 74236 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 74236 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- 
common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 74236 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 74236 ']' 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.141 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (74236) - No such process 00:06:48.141 ERROR: process (pid: 74236) is no longer running 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.141 04:02:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.142 04:02:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.142 00:06:48.142 real 0m1.696s 00:06:48.142 user 0m1.819s 00:06:48.142 sys 0m0.485s 00:06:48.142 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.142 ************************************ 00:06:48.142 END TEST default_locks 00:06:48.142 ************************************ 00:06:48.142 04:02:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.142 04:02:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:48.142 04:02:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:48.142 04:02:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.142 04:02:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.142 04:02:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.142 ************************************ 00:06:48.142 START TEST default_locks_via_rpc 00:06:48.142 ************************************ 00:06:48.142 04:02:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:48.142 04:02:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # 
spdk_tgt_pid=74282 00:06:48.142 04:02:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 74282 00:06:48.142 04:02:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.142 04:02:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 74282 ']' 00:06:48.142 04:02:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.142 04:02:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.142 04:02:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.142 04:02:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.142 04:02:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.142 [2024-07-23 04:02:41.440992] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:48.142 [2024-07-23 04:02:41.441081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74282 ] 00:06:48.401 [2024-07-23 04:02:41.562553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.401 [2024-07-23 04:02:41.580233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.401 [2024-07-23 04:02:41.647432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.401 [2024-07-23 04:02:41.702696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 
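The lock checks in these cpu_locks tests boil down to one probe, visible in the trace as lslocks piped into grep: a target that was allowed to claim its core mask holds a file lock named spdk_cpu_lock, and a target started with --disable-cpumask-locks does not. A minimal sketch of that probe; the function wrapper and the echo messages are illustrative.

#!/usr/bin/env bash
# Does the given SPDK target pid hold a CPU-core lock file?
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

if locks_exist 74282; then
    echo "pid 74282 holds an spdk_cpu_lock file"
else
    echo "pid 74282 runs with core locks disabled or has released them"
fi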
00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 74282 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 74282 00:06:49.342 04:02:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 74282 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 74282 ']' 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 74282 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74282 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.601 killing process with pid 74282 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74282' 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 74282 00:06:49.601 04:02:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 74282 00:06:49.859 00:06:49.859 real 0m1.781s 00:06:49.859 user 0m1.917s 00:06:49.859 sys 0m0.510s 00:06:49.859 04:02:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.859 04:02:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.859 ************************************ 00:06:49.859 END TEST default_locks_via_rpc 00:06:49.859 ************************************ 00:06:49.859 04:02:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.859 04:02:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:49.859 04:02:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.859 04:02:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.859 04:02:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.137 ************************************ 00:06:50.137 START TEST non_locking_app_on_locked_coremask 00:06:50.137 ************************************ 00:06:50.137 04:02:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:50.137 04:02:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=74333 00:06:50.137 04:02:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.137 04:02:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 74333 /var/tmp/spdk.sock 00:06:50.137 04:02:43 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@829 -- # '[' -z 74333 ']' 00:06:50.137 04:02:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.137 04:02:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.137 04:02:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.137 04:02:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.137 04:02:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.137 [2024-07-23 04:02:43.268504] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:50.137 [2024-07-23 04:02:43.268604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74333 ] 00:06:50.137 [2024-07-23 04:02:43.390041] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:50.137 [2024-07-23 04:02:43.408563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.396 [2024-07-23 04:02:43.497050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.396 [2024-07-23 04:02:43.553855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.969 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.969 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:50.969 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=74349 00:06:50.969 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 74349 /var/tmp/spdk2.sock 00:06:50.969 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74349 ']' 00:06:50.969 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.969 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.970 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
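The second target above is started on the same core mask as the first but with --disable-cpumask-locks and its own RPC socket (-r /var/tmp/spdk2.sock), which is what lets two instances share core 0 without contending for the lock file. A sketch of that launch pattern; waitforlisten is the autotest helper used in the log, and the sleep calls below are only a stand-in for it.

#!/usr/bin/env bash
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# First instance claims core 0 and creates the spdk_cpu_lock file.
"$SPDK_TGT" -m 0x1 &
pid1=$!
sleep 2   # stand-in for: waitforlisten "$pid1" /var/tmp/spdk.sock

# Second instance reuses the mask but skips lock acquisition and gets its own socket.
"$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
sleep 2   # stand-in for: waitforlisten "$pid2" /var/tmp/spdk2.sock

# Only the first instance should show up as holding the lock.
lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "first target holds the core-0 lock"
lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "second target started without the lock"

kill "$pid1" "$pid2"
wait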
00:06:50.970 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.970 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.970 04:02:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:50.970 [2024-07-23 04:02:44.272114] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:50.970 [2024-07-23 04:02:44.272201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74349 ] 00:06:51.232 [2024-07-23 04:02:44.393638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:51.232 [2024-07-23 04:02:44.411508] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:51.232 [2024-07-23 04:02:44.411557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.232 [2024-07-23 04:02:44.568599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.490 [2024-07-23 04:02:44.673283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.054 04:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.054 04:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:52.054 04:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 74333 00:06:52.054 04:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74333 00:06:52.054 04:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.015 04:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 74333 00:06:53.015 04:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 74333 ']' 00:06:53.015 04:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 74333 00:06:53.015 04:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:53.015 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.015 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74333 00:06:53.015 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.015 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.015 killing process with pid 74333 00:06:53.015 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74333' 00:06:53.015 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 74333 00:06:53.015 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # wait 74333 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 74349 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 74349 ']' 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 74349 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74349 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.581 killing process with pid 74349 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74349' 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 74349 00:06:53.581 04:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 74349 00:06:53.839 00:06:53.839 real 0m3.926s 00:06:53.839 user 0m4.283s 00:06:53.839 sys 0m1.120s 00:06:53.839 04:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.839 04:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.839 ************************************ 00:06:53.839 END TEST non_locking_app_on_locked_coremask 00:06:53.839 ************************************ 00:06:53.839 04:02:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:53.839 04:02:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:53.839 04:02:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.839 04:02:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.839 04:02:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.098 ************************************ 00:06:54.098 START TEST locking_app_on_unlocked_coremask 00:06:54.098 ************************************ 00:06:54.098 04:02:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:54.098 04:02:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=74417 00:06:54.098 04:02:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 74417 /var/tmp/spdk.sock 00:06:54.098 04:02:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74417 ']' 00:06:54.098 04:02:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.098 04:02:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.098 04:02:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:54.098 04:02:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.098 04:02:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.098 04:02:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.098 [2024-07-23 04:02:47.248401] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:54.098 [2024-07-23 04:02:47.248511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74417 ] 00:06:54.098 [2024-07-23 04:02:47.369755] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.098 [2024-07-23 04:02:47.387874] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:54.098 [2024-07-23 04:02:47.387933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.357 [2024-07-23 04:02:47.452373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.357 [2024-07-23 04:02:47.503834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.924 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.924 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:54.924 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=74433 00:06:54.924 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:54.925 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 74433 /var/tmp/spdk2.sock 00:06:54.925 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74433 ']' 00:06:54.925 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.925 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.925 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.925 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.925 04:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.925 [2024-07-23 04:02:48.235352] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:06:54.925 [2024-07-23 04:02:48.235475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74433 ] 00:06:55.183 [2024-07-23 04:02:48.358594] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:55.183 [2024-07-23 04:02:48.379931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.183 [2024-07-23 04:02:48.502979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.442 [2024-07-23 04:02:48.605770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.010 04:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.010 04:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:56.010 04:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 74433 00:06:56.010 04:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74433 00:06:56.010 04:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 74417 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 74417 ']' 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 74417 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74417 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.951 killing process with pid 74417 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74417' 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 74417 00:06:56.951 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 74417 00:06:57.517 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 74433 00:06:57.517 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 74433 ']' 00:06:57.517 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 74433 00:06:57.517 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:57.517 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.517 04:02:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74433 00:06:57.517 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.517 killing process with pid 74433 00:06:57.517 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.517 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74433' 00:06:57.517 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 74433 00:06:57.517 04:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 74433 00:06:58.119 00:06:58.119 real 0m4.021s 00:06:58.119 user 0m4.439s 00:06:58.119 sys 0m1.108s 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.119 ************************************ 00:06:58.119 END TEST locking_app_on_unlocked_coremask 00:06:58.119 ************************************ 00:06:58.119 04:02:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:58.119 04:02:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:58.119 04:02:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.119 04:02:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.119 04:02:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.119 ************************************ 00:06:58.119 START TEST locking_app_on_locked_coremask 00:06:58.119 ************************************ 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=74502 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 74502 /var/tmp/spdk.sock 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74502 ']' 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.119 04:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.119 [2024-07-23 04:02:51.329267] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:58.119 [2024-07-23 04:02:51.329389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74502 ] 00:06:58.119 [2024-07-23 04:02:51.450642] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:58.378 [2024-07-23 04:02:51.464274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.378 [2024-07-23 04:02:51.540366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.378 [2024-07-23 04:02:51.601444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.943 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.943 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=74518 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 74518 /var/tmp/spdk2.sock 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 74518 /var/tmp/spdk2.sock 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 74518 /var/tmp/spdk2.sock 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 74518 ']' 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
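[editor's note] The locks_exist helper that this test group keeps invoking is exactly the two commands visible in the trace: lslocks on the target pid, filtered for SPDK's per-core lock files under /var/tmp. A standalone sketch of that probe, with the pid 74502 taken from the surrounding log, could look like:

    # Does the given spdk_tgt process still hold its per-core lock files?
    locks_exist() {
        local pid=$1
        # The core locks show up in this trace as POSIX file locks on
        # /var/tmp/spdk_cpu_lock_NNN, so lslocks -p lists them for the holder.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 74502 && echo "pid 74502 holds its CPU core lock"

Since the target just launched above uses -m 0x1 and no --disable-cpumask-locks, the check should succeed for core 0.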
00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.201 04:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.201 [2024-07-23 04:02:52.353957] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:06:59.201 [2024-07-23 04:02:52.354057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74518 ] 00:06:59.201 [2024-07-23 04:02:52.476981] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.201 [2024-07-23 04:02:52.500384] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 74502 has claimed it. 00:06:59.201 [2024-07-23 04:02:52.500465] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:59.768 ERROR: process (pid: 74518) is no longer running 00:06:59.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (74518) - No such process 00:06:59.768 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.768 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:59.768 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:59.768 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.768 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.768 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.768 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 74502 00:06:59.768 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74502 00:06:59.768 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 74502 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 74502 ']' 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 74502 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74502 00:07:00.346 killing process with pid 74502 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 74502' 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 74502 00:07:00.346 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 74502 00:07:00.606 00:07:00.606 real 0m2.575s 00:07:00.606 user 0m2.971s 00:07:00.606 sys 0m0.647s 00:07:00.606 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.606 ************************************ 00:07:00.606 END TEST locking_app_on_locked_coremask 00:07:00.606 ************************************ 00:07:00.606 04:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.606 04:02:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:00.606 04:02:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:00.606 04:02:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.606 04:02:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.606 04:02:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.606 ************************************ 00:07:00.606 START TEST locking_overlapped_coremask 00:07:00.606 ************************************ 00:07:00.606 04:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:00.606 04:02:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=74558 00:07:00.606 04:02:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 74558 /var/tmp/spdk.sock 00:07:00.606 04:02:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:00.606 04:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 74558 ']' 00:07:00.606 04:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.606 04:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.606 04:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.606 04:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.606 04:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.865 [2024-07-23 04:02:53.953499] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:00.865 [2024-07-23 04:02:53.953581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74558 ] 00:07:00.865 [2024-07-23 04:02:54.075733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
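[editor's note] The failure logged just above, pid 74518 exiting with "Cannot create lock on core 0, probably process 74502 has claimed it", is the behavior under test: a second target whose core mask overlaps an already-locked core refuses to start unless it is given --disable-cpumask-locks. A rough manual reproduction, reusing only the binary path and flags that appear in this log (and assuming hugepages are already set up, as in this CI run), might be:

    # First instance claims core 0 (mask 0x1) and takes /var/tmp/spdk_cpu_lock_000.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
    sleep 2
    # Second instance overlaps core 0 with the same mask; it should log
    # "Cannot create lock on core 0 ..." and exit, unless --disable-cpumask-locks is added.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock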
00:07:00.865 [2024-07-23 04:02:54.093168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.865 [2024-07-23 04:02:54.148338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.865 [2024-07-23 04:02:54.148479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.865 [2024-07-23 04:02:54.148481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.865 [2024-07-23 04:02:54.201782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=74576 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 74576 /var/tmp/spdk2.sock 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 74576 /var/tmp/spdk2.sock 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 74576 /var/tmp/spdk2.sock 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 74576 ']' 00:07:01.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.806 04:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.806 [2024-07-23 04:02:54.967540] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:07:01.806 [2024-07-23 04:02:54.967632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74576 ] 00:07:01.806 [2024-07-23 04:02:55.091968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:01.806 [2024-07-23 04:02:55.113287] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74558 has claimed it. 00:07:01.806 [2024-07-23 04:02:55.113365] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:02.374 ERROR: process (pid: 74576) is no longer running 00:07:02.374 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (74576) - No such process 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 74558 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 74558 ']' 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 74558 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74558 00:07:02.374 killing process with pid 74558 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 74558' 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 74558 00:07:02.374 04:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 74558 00:07:02.941 00:07:02.941 real 0m2.150s 00:07:02.941 user 0m6.065s 00:07:02.941 sys 0m0.415s 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.941 ************************************ 00:07:02.941 END TEST locking_overlapped_coremask 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.941 ************************************ 00:07:02.941 04:02:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:02.941 04:02:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:02.941 04:02:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.941 04:02:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.941 04:02:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.941 ************************************ 00:07:02.941 START TEST locking_overlapped_coremask_via_rpc 00:07:02.941 ************************************ 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=74627 00:07:02.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 74627 /var/tmp/spdk.sock 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 74627 ']' 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.941 04:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.941 [2024-07-23 04:02:56.139322] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:02.942 [2024-07-23 04:02:56.139411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74627 ] 00:07:02.942 [2024-07-23 04:02:56.256046] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:02.942 [2024-07-23 04:02:56.272465] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:02.942 [2024-07-23 04:02:56.272642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.200 [2024-07-23 04:02:56.342028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.200 [2024-07-23 04:02:56.342187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.200 [2024-07-23 04:02:56.342192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.200 [2024-07-23 04:02:56.397907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=74641 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 74641 /var/tmp/spdk2.sock 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 74641 ']' 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.767 04:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.031 [2024-07-23 04:02:57.119851] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:04.031 [2024-07-23 04:02:57.120091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74641 ] 00:07:04.031 [2024-07-23 04:02:57.242741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:04.031 [2024-07-23 04:02:57.259206] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:04.031 [2024-07-23 04:02:57.259241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.301 [2024-07-23 04:02:57.462942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.302 [2024-07-23 04:02:57.463075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.302 [2024-07-23 04:02:57.463075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:04.302 [2024-07-23 04:02:57.572382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:04.880 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.881 [2024-07-23 04:02:58.084056] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74627 has claimed it. 00:07:04.881 request: 00:07:04.881 { 00:07:04.881 "method": "framework_enable_cpumask_locks", 00:07:04.881 "req_id": 1 00:07:04.881 } 00:07:04.881 Got JSON-RPC error response 00:07:04.881 response: 00:07:04.881 { 00:07:04.881 "code": -32603, 00:07:04.881 "message": "Failed to claim CPU core: 2" 00:07:04.881 } 00:07:04.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
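[editor's note] In this variant both targets start with --disable-cpumask-locks and locking is switched on afterwards over JSON-RPC. framework_enable_cpumask_locks succeeds on the primary (mask 0x7, cores 0-2, pid 74627) but fails on the secondary (mask 0x1c, cores 2-4) with the -32603 response shown above, because core 2 is already locked by the primary. The harness goes through its rpc_cmd wrapper; assuming the stock rpc.py script shipped in the SPDK repo, a direct equivalent of the failing call would be roughly:

    # Ask the secondary target (socket /var/tmp/spdk2.sock) to take its core locks now.
    # With pid 74627 holding the core 2 lock, this is expected to return
    # JSON-RPC error -32603: "Failed to claim CPU core: 2".
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks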
00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 74627 /var/tmp/spdk.sock 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 74627 ']' 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.881 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.139 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.139 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:05.139 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 74641 /var/tmp/spdk2.sock 00:07:05.139 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 74641 ']' 00:07:05.139 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.139 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.139 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
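[editor's note] The check_remaining_locks step, whose expansion appears in the earlier locking_overlapped_coremask run and again just below, is a plain glob-and-compare: for a 3-core mask the only lock files allowed to exist are spdk_cpu_lock_000 through _002. A condensed sketch of the same comparison, with the variable names copied from the trace:

    # Verify that exactly the expected per-core lock files exist for mask 0x7.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only cores 0-2 are locked"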
00:07:05.139 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.139 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.396 ************************************ 00:07:05.396 END TEST locking_overlapped_coremask_via_rpc 00:07:05.396 ************************************ 00:07:05.396 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.396 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:05.396 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:05.396 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.396 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.396 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.396 00:07:05.396 real 0m2.503s 00:07:05.396 user 0m1.241s 00:07:05.396 sys 0m0.190s 00:07:05.397 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.397 04:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:05.397 04:02:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:05.397 04:02:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74627 ]] 00:07:05.397 04:02:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74627 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 74627 ']' 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 74627 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74627 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.397 killing process with pid 74627 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74627' 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 74627 00:07:05.397 04:02:58 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 74627 00:07:05.962 04:02:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74641 ]] 00:07:05.962 04:02:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74641 00:07:05.962 04:02:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 74641 ']' 00:07:05.962 04:02:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 74641 00:07:05.962 04:02:59 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:05.962 04:02:59 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.962 04:02:59 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74641 00:07:05.962 killing process with pid 74641 00:07:05.962 04:02:59 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:05.962 04:02:59 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:05.962 04:02:59 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74641' 00:07:05.962 04:02:59 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 74641 00:07:05.962 04:02:59 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 74641 00:07:06.220 04:02:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.220 04:02:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:06.220 04:02:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74627 ]] 00:07:06.220 04:02:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74627 00:07:06.220 04:02:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 74627 ']' 00:07:06.220 04:02:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 74627 00:07:06.220 Process with pid 74627 is not found 00:07:06.220 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (74627) - No such process 00:07:06.220 04:02:59 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 74627 is not found' 00:07:06.220 04:02:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74641 ]] 00:07:06.220 04:02:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74641 00:07:06.220 04:02:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 74641 ']' 00:07:06.220 04:02:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 74641 00:07:06.220 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (74641) - No such process 00:07:06.220 04:02:59 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 74641 is not found' 00:07:06.220 Process with pid 74641 is not found 00:07:06.220 04:02:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.220 ************************************ 00:07:06.220 END TEST cpu_locks 00:07:06.220 ************************************ 00:07:06.220 00:07:06.220 real 0m19.921s 00:07:06.220 user 0m34.691s 00:07:06.220 sys 0m5.296s 00:07:06.220 04:02:59 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.220 04:02:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.220 04:02:59 event -- common/autotest_common.sh@1142 -- # return 0 00:07:06.220 ************************************ 00:07:06.220 END TEST event 00:07:06.220 ************************************ 00:07:06.220 00:07:06.220 real 0m46.286s 00:07:06.220 user 1m28.864s 00:07:06.220 sys 0m8.672s 00:07:06.220 04:02:59 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.220 04:02:59 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.220 04:02:59 -- common/autotest_common.sh@1142 -- # return 0 00:07:06.220 04:02:59 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:06.220 04:02:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.220 04:02:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.220 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:07:06.220 ************************************ 00:07:06.220 START TEST thread 
00:07:06.220 ************************************ 00:07:06.220 04:02:59 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:06.478 * Looking for test storage... 00:07:06.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:06.478 04:02:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.478 04:02:59 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:06.478 04:02:59 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.478 04:02:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.478 ************************************ 00:07:06.478 START TEST thread_poller_perf 00:07:06.478 ************************************ 00:07:06.478 04:02:59 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.478 [2024-07-23 04:02:59.649111] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:06.478 [2024-07-23 04:02:59.649393] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74762 ] 00:07:06.478 [2024-07-23 04:02:59.769874] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.478 [2024-07-23 04:02:59.787815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.755 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:06.755 [2024-07-23 04:02:59.855009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.688 ====================================== 00:07:07.688 busy:2208181376 (cyc) 00:07:07.688 total_run_count: 377000 00:07:07.688 tsc_hz: 2200000000 (cyc) 00:07:07.688 ====================================== 00:07:07.688 poller_cost: 5857 (cyc), 2662 (nsec) 00:07:07.688 00:07:07.688 real 0m1.303s 00:07:07.688 user 0m1.141s 00:07:07.688 sys 0m0.054s 00:07:07.688 04:03:00 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.688 ************************************ 00:07:07.688 END TEST thread_poller_perf 00:07:07.688 04:03:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.688 ************************************ 00:07:07.688 04:03:00 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:07.688 04:03:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.688 04:03:00 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:07.688 04:03:00 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.688 04:03:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.688 ************************************ 00:07:07.688 START TEST thread_poller_perf 00:07:07.688 ************************************ 00:07:07.688 04:03:00 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.688 [2024-07-23 04:03:01.011831] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
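[editor's note] The summary block above is internally consistent, which is a quick way to sanity-check a poller_perf run: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure follows from the reported TSC frequency. Checked with shell arithmetic on the printed values:

    # 2208181376 busy cycles over 377000 iterations -> ~5857 cycles per poll.
    echo $(( 2208181376 / 377000 ))
    # 5857 cycles at tsc_hz 2200000000 (2.2 GHz) -> ~2662 ns per poll.
    echo $(( 5857 * 1000000000 / 2200000000 ))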
00:07:07.688 [2024-07-23 04:03:01.011944] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74792 ] 00:07:07.953 [2024-07-23 04:03:01.131681] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:07.953 [2024-07-23 04:03:01.146504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.953 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:07.953 [2024-07-23 04:03:01.229021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.346 ====================================== 00:07:09.346 busy:2202337622 (cyc) 00:07:09.346 total_run_count: 4744000 00:07:09.346 tsc_hz: 2200000000 (cyc) 00:07:09.346 ====================================== 00:07:09.346 poller_cost: 464 (cyc), 210 (nsec) 00:07:09.346 ************************************ 00:07:09.346 END TEST thread_poller_perf 00:07:09.346 ************************************ 00:07:09.346 00:07:09.346 real 0m1.313s 00:07:09.346 user 0m1.151s 00:07:09.346 sys 0m0.054s 00:07:09.346 04:03:02 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.346 04:03:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.346 04:03:02 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:09.346 04:03:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:09.346 ************************************ 00:07:09.346 END TEST thread 00:07:09.346 ************************************ 00:07:09.346 00:07:09.346 real 0m2.808s 00:07:09.346 user 0m2.366s 00:07:09.346 sys 0m0.218s 00:07:09.346 04:03:02 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.346 04:03:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.346 04:03:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:09.346 04:03:02 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:09.346 04:03:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.346 04:03:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.346 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:09.346 ************************************ 00:07:09.346 START TEST accel 00:07:09.346 ************************************ 00:07:09.346 04:03:02 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:09.346 * Looking for test storage... 
00:07:09.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:09.346 04:03:02 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:09.346 04:03:02 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:09.346 04:03:02 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:09.346 04:03:02 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=74872 00:07:09.346 04:03:02 accel -- accel/accel.sh@63 -- # waitforlisten 74872 00:07:09.346 04:03:02 accel -- common/autotest_common.sh@829 -- # '[' -z 74872 ']' 00:07:09.346 04:03:02 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.346 04:03:02 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.346 04:03:02 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:09.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.346 04:03:02 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.346 04:03:02 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:09.346 04:03:02 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.346 04:03:02 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.346 04:03:02 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.346 04:03:02 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.346 04:03:02 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.346 04:03:02 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.346 04:03:02 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:09.346 04:03:02 accel -- accel/accel.sh@41 -- # jq -r . 00:07:09.346 04:03:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.346 [2024-07-23 04:03:02.560679] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:09.346 [2024-07-23 04:03:02.560793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74872 ] 00:07:09.346 [2024-07-23 04:03:02.683743] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:09.604 [2024-07-23 04:03:02.700792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.604 [2024-07-23 04:03:02.758095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.604 [2024-07-23 04:03:02.810609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.537 04:03:03 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.537 04:03:03 accel -- common/autotest_common.sh@862 -- # return 0 00:07:10.537 04:03:03 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:10.537 04:03:03 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:10.537 04:03:03 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:10.537 04:03:03 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:10.537 04:03:03 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:10.537 04:03:03 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:10.537 04:03:03 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.537 04:03:03 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:10.537 04:03:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.537 04:03:03 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.537 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.537 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.537 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.537 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.537 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.537 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.537 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.537 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.537 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.537 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.537 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.537 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.537 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.537 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.537 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.537 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.537 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.537 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.537 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.537 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.538 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.538 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.538 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.538 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.538 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.538 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.538 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.538 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.538 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # IFS== 00:07:10.538 04:03:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:10.538 04:03:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:10.538 04:03:03 accel -- accel/accel.sh@75 -- # killprocess 74872 00:07:10.538 04:03:03 accel -- common/autotest_common.sh@948 -- # '[' -z 74872 ']' 00:07:10.538 04:03:03 accel -- common/autotest_common.sh@952 -- # kill -0 74872 00:07:10.538 04:03:03 accel -- common/autotest_common.sh@953 -- # uname 00:07:10.538 04:03:03 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.538 04:03:03 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74872 00:07:10.538 04:03:03 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.538 04:03:03 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.538 04:03:03 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74872' 00:07:10.538 killing process with pid 74872 00:07:10.538 04:03:03 accel -- common/autotest_common.sh@967 -- # kill 74872 00:07:10.538 04:03:03 accel -- common/autotest_common.sh@972 -- # wait 74872 00:07:10.801 04:03:03 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:10.801 04:03:03 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:10.801 04:03:03 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:10.801 04:03:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.801 04:03:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.801 04:03:04 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:10.801 04:03:04 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:10.801 04:03:04 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:10.801 04:03:04 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.801 04:03:04 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.801 04:03:04 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.801 04:03:04 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.801 04:03:04 accel.accel_help -- accel/accel.sh@36 -- # [[ 
-n '' ]] 00:07:10.801 04:03:04 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:10.801 04:03:04 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:07:10.801 04:03:04 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.801 04:03:04 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:10.801 04:03:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.801 04:03:04 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:10.801 04:03:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.801 04:03:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.801 04:03:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.801 ************************************ 00:07:10.801 START TEST accel_missing_filename 00:07:10.801 ************************************ 00:07:10.801 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:10.801 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:10.801 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:10.801 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:10.801 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.801 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:10.801 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.801 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:10.801 04:03:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:10.801 04:03:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:10.801 04:03:04 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.801 04:03:04 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.801 04:03:04 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.801 04:03:04 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.801 04:03:04 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.801 04:03:04 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:10.801 04:03:04 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:10.801 [2024-07-23 04:03:04.101006] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:10.801 [2024-07-23 04:03:04.101095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74918 ] 00:07:11.090 [2024-07-23 04:03:04.221045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
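The trace above is the accel_missing_filename case: accel_perf is launched with '-t 1 -w compress' but no '-l' input file, and run_test wraps the call in the harness's NOT helper, so the test only passes when accel_perf exits non-zero ('A filename is required.' appears further down). A minimal stand-alone sketch of the same check, assuming the binary path printed in this trace and using a hypothetical ACCEL_PERF variable:
  # Hypothetical re-run of the missing-filename case traced in this log.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  if "$ACCEL_PERF" -t 1 -w compress; then
      echo "unexpected success: compress should require -l <input file>" >&2
      exit 1
  fi
  echo "accel_perf rejected compress without -l, as expected"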
00:07:11.090 [2024-07-23 04:03:04.241253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.090 [2024-07-23 04:03:04.309463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.090 [2024-07-23 04:03:04.362614] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.349 [2024-07-23 04:03:04.439138] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:11.349 A filename is required. 00:07:11.349 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:11.349 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.349 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:11.349 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:11.349 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:11.349 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.349 00:07:11.349 real 0m0.436s 00:07:11.349 user 0m0.268s 00:07:11.349 sys 0m0.111s 00:07:11.349 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.349 ************************************ 00:07:11.349 END TEST accel_missing_filename 00:07:11.349 ************************************ 00:07:11.349 04:03:04 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:11.349 04:03:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.349 04:03:04 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.349 04:03:04 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:11.349 04:03:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.349 04:03:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.349 ************************************ 00:07:11.349 START TEST accel_compress_verify 00:07:11.349 ************************************ 00:07:11.349 04:03:04 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.349 04:03:04 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:11.349 04:03:04 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.349 04:03:04 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:11.349 04:03:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.349 04:03:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:11.349 04:03:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.349 04:03:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.349 04:03:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.349 04:03:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:11.349 04:03:04 accel.accel_compress_verify -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.349 04:03:04 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.349 04:03:04 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.349 04:03:04 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.349 04:03:04 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.349 04:03:04 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:11.349 04:03:04 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:11.349 [2024-07-23 04:03:04.588816] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:11.349 [2024-07-23 04:03:04.588945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74948 ] 00:07:11.607 [2024-07-23 04:03:04.709800] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:11.607 [2024-07-23 04:03:04.727604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.607 [2024-07-23 04:03:04.821437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.607 [2024-07-23 04:03:04.884063] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.866 [2024-07-23 04:03:04.956336] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:11.866 00:07:11.866 Compression does not support the verify option, aborting. 00:07:11.866 04:03:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:11.866 04:03:05 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.866 04:03:05 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:11.866 04:03:05 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:11.866 04:03:05 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:11.866 04:03:05 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.866 00:07:11.866 real 0m0.455s 00:07:11.866 user 0m0.287s 00:07:11.866 sys 0m0.117s 00:07:11.866 ************************************ 00:07:11.866 END TEST accel_compress_verify 00:07:11.866 ************************************ 00:07:11.866 04:03:05 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.866 04:03:05 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:11.866 04:03:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.866 04:03:05 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:11.866 04:03:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:11.866 04:03:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.866 04:03:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.866 ************************************ 00:07:11.866 START TEST accel_wrong_workload 00:07:11.866 ************************************ 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:11.866 04:03:05 
accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:11.866 04:03:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:11.866 04:03:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:11.866 04:03:05 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.866 04:03:05 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.866 04:03:05 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.866 04:03:05 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.866 04:03:05 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.866 04:03:05 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:11.866 04:03:05 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:11.866 Unsupported workload type: foobar 00:07:11.866 [2024-07-23 04:03:05.093760] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:11.866 accel_perf options: 00:07:11.866 [-h help message] 00:07:11.866 [-q queue depth per core] 00:07:11.866 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:11.866 [-T number of threads per core 00:07:11.866 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:11.866 [-t time in seconds] 00:07:11.866 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:11.866 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:11.866 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:11.866 [-l for compress/decompress workloads, name of uncompressed input file 00:07:11.866 [-S for crc32c workload, use this seed value (default 0) 00:07:11.866 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:11.866 [-f for fill workload, use this BYTE value (default 255) 00:07:11.866 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:11.866 [-y verify result if this switch is on] 00:07:11.866 [-a tasks to allocate per core (default: same value as -q)] 00:07:11.866 Can be used to spread operations across a wider range of memory. 
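The usage text above is accel_perf rejecting '-w foobar' in the accel_wrong_workload case; as with the other negative tests here, the NOT wrapper treats the non-zero exit as success. A rough equivalent, reusing the hypothetical ACCEL_PERF path variable from the earlier sketch:
  # Sketch of the wrong-workload check: an unknown -w value must make accel_perf fail.
  if "$ACCEL_PERF" -t 1 -w foobar; then
      echo "unexpected success: foobar is not a supported workload" >&2
      exit 1
  fi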
00:07:11.866 ************************************ 00:07:11.866 END TEST accel_wrong_workload 00:07:11.866 ************************************ 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.866 00:07:11.866 real 0m0.031s 00:07:11.866 user 0m0.018s 00:07:11.866 sys 0m0.012s 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.866 04:03:05 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:11.866 04:03:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.866 04:03:05 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:11.866 04:03:05 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:11.866 04:03:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.866 04:03:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.866 ************************************ 00:07:11.866 START TEST accel_negative_buffers 00:07:11.866 ************************************ 00:07:11.866 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:11.866 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:11.866 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:11.866 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:11.866 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.866 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:11.866 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.866 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:11.866 04:03:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:11.866 04:03:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:11.866 04:03:05 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.867 04:03:05 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.867 04:03:05 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.867 04:03:05 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.867 04:03:05 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.867 04:03:05 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:11.867 04:03:05 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:11.867 -x option must be non-negative. 
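The '-x option must be non-negative.' message above belongs to the accel_negative_buffers case, which passes '-x -1' to the xor workload; the usage dump that follows is accel_perf's standard response to a rejected argument. Sketched the same way, again with the assumed ACCEL_PERF variable:
  # Sketch of the negative-buffers check: xor with -1 source buffers must be rejected.
  if "$ACCEL_PERF" -t 1 -w xor -y -x -1; then
      echo "unexpected success: -x -1 should have been rejected" >&2
      exit 1
  fi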
00:07:11.867 [2024-07-23 04:03:05.177833] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:11.867 accel_perf options: 00:07:11.867 [-h help message] 00:07:11.867 [-q queue depth per core] 00:07:11.867 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:11.867 [-T number of threads per core 00:07:11.867 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:11.867 [-t time in seconds] 00:07:11.867 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:11.867 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:11.867 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:11.867 [-l for compress/decompress workloads, name of uncompressed input file 00:07:11.867 [-S for crc32c workload, use this seed value (default 0) 00:07:11.867 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:11.867 [-f for fill workload, use this BYTE value (default 255) 00:07:11.867 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:11.867 [-y verify result if this switch is on] 00:07:11.867 [-a tasks to allocate per core (default: same value as -q)] 00:07:11.867 Can be used to spread operations across a wider range of memory. 00:07:11.867 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:11.867 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.867 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.867 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.867 00:07:11.867 real 0m0.032s 00:07:11.867 user 0m0.017s 00:07:11.867 sys 0m0.015s 00:07:11.867 ************************************ 00:07:11.867 END TEST accel_negative_buffers 00:07:11.867 ************************************ 00:07:11.867 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.867 04:03:05 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:12.126 04:03:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.126 04:03:05 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:12.126 04:03:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:12.126 04:03:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.126 04:03:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.126 ************************************ 00:07:12.126 START TEST accel_crc32c 00:07:12.126 ************************************ 00:07:12.126 04:03:05 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:12.126 04:03:05 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:12.126 [2024-07-23 04:03:05.265280] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:12.126 [2024-07-23 04:03:05.265383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75007 ] 00:07:12.126 [2024-07-23 04:03:05.385865] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:12.126 [2024-07-23 04:03:05.403995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.384 [2024-07-23 04:03:05.468245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.384 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.384 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.384 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.384 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.384 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.384 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.384 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.384 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.384 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:12.385 04:03:05 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.385 04:03:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.762 04:03:06 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:13.762 04:03:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.762 00:07:13.762 real 0m1.430s 00:07:13.762 user 0m1.223s 00:07:13.762 sys 0m0.115s 00:07:13.762 04:03:06 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.762 04:03:06 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:13.762 ************************************ 00:07:13.762 END TEST accel_crc32c 00:07:13.762 ************************************ 00:07:13.762 04:03:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.762 04:03:06 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:13.762 04:03:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:13.762 04:03:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.762 04:03:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.762 ************************************ 00:07:13.762 START TEST accel_crc32c_C2 00:07:13.762 ************************************ 00:07:13.762 04:03:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:13.762 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.762 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:13.762 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.762 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.762 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
crc32c -y -C 2 00:07:13.762 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:13.762 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.762 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.762 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:13.763 [2024-07-23 04:03:06.742198] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:13.763 [2024-07-23 04:03:06.742289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75041 ] 00:07:13.763 [2024-07-23 04:03:06.857513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:13.763 [2024-07-23 04:03:06.873236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.763 [2024-07-23 04:03:06.941906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 
00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.763 04:03:07 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.763 04:03:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.140 00:07:15.140 real 0m1.425s 00:07:15.140 user 0m1.223s 00:07:15.140 sys 0m0.108s 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.140 04:03:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:15.140 ************************************ 00:07:15.140 END TEST accel_crc32c_C2 00:07:15.140 ************************************ 00:07:15.140 04:03:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.140 04:03:08 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:15.140 04:03:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:15.140 04:03:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.140 04:03:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.140 ************************************ 00:07:15.140 START TEST accel_copy 00:07:15.140 ************************************ 00:07:15.140 04:03:08 accel.accel_copy -- common/autotest_common.sh@1123 -- 
# accel_test -t 1 -w copy -y 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:15.140 [2024-07-23 04:03:08.216121] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:15.140 [2024-07-23 04:03:08.216203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75070 ] 00:07:15.140 [2024-07-23 04:03:08.336535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
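The accel_copy case traced above drives accel_perf with '-t 1 -w copy -y' (one-second run, verify results), with a JSON config generated by build_accel_config and passed on /dev/fd/62. A direct sketch of the same workload, assuming no extra accel modules are configured so the '-c' descriptor can be dropped:
  # Sketch of the copy workload run traced above (software module, 1 second, verified).
  "$ACCEL_PERF" -t 1 -w copy -y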
00:07:15.140 [2024-07-23 04:03:08.354961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.140 [2024-07-23 04:03:08.420421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.140 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.408 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:15.409 04:03:08 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:15.409 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.410 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.410 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.410 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.410 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.410 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.410 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.410 04:03:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:15.410 04:03:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.410 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.410 04:03:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.355 04:03:09 
accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:16.355 04:03:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.355 00:07:16.355 real 0m1.423s 00:07:16.355 user 0m1.219s 00:07:16.355 sys 0m0.112s 00:07:16.355 04:03:09 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.355 04:03:09 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.355 ************************************ 00:07:16.355 END TEST accel_copy 00:07:16.355 ************************************ 00:07:16.355 04:03:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.355 04:03:09 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.355 04:03:09 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:16.355 04:03:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.355 04:03:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.355 ************************************ 00:07:16.355 START TEST accel_fill 00:07:16.355 ************************************ 00:07:16.355 04:03:09 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:16.355 04:03:09 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:16.355 [2024-07-23 04:03:09.689420] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:16.355 [2024-07-23 04:03:09.689577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75109 ] 00:07:16.614 [2024-07-23 04:03:09.814199] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
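The accel_fill case traced above adds queue and task tuning on top of the verify flag: '-w fill -f 128 -q 64 -a 64 -y', where, per the usage text printed earlier in this log, -f is the fill byte, -q the queue depth per core and -a the number of tasks to allocate per core. A direct sketch of the same run, under the same assumptions as the copy sketch:
  # Sketch of the fill workload run traced above: fill byte 128, queue depth 64, 64 tasks per core, verify on.
  "$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y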
00:07:16.614 [2024-07-23 04:03:09.834568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.614 [2024-07-23 04:03:09.926175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:16.893 04:03:10 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.893 04:03:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.277 04:03:11 
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:18.277 04:03:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.277 00:07:18.277 real 0m1.552s 00:07:18.277 user 0m1.305s 00:07:18.277 sys 0m0.151s 00:07:18.277 04:03:11 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.277 04:03:11 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:18.277 ************************************ 00:07:18.277 END TEST accel_fill 00:07:18.277 ************************************ 00:07:18.277 04:03:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.277 04:03:11 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:18.277 04:03:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:18.277 04:03:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.277 04:03:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.277 ************************************ 00:07:18.277 START TEST accel_copy_crc32c 00:07:18.277 ************************************ 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:18.277 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:18.277 [2024-07-23 04:03:11.294075] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:18.277 [2024-07-23 04:03:11.294719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75139 ] 00:07:18.278 [2024-07-23 04:03:11.420012] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:18.278 [2024-07-23 04:03:11.433172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.278 [2024-07-23 04:03:11.513724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 04:03:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 
04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.655 00:07:19.655 real 0m1.513s 00:07:19.655 user 0m1.268s 00:07:19.655 sys 0m0.151s 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.655 04:03:12 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:19.655 ************************************ 00:07:19.655 END TEST accel_copy_crc32c 00:07:19.655 ************************************ 00:07:19.655 04:03:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.655 04:03:12 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:19.655 04:03:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:19.655 04:03:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.655 04:03:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.655 ************************************ 00:07:19.655 START TEST accel_copy_crc32c_C2 00:07:19.655 ************************************ 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.655 04:03:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:19.655 [2024-07-23 04:03:12.856438] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:19.655 [2024-07-23 04:03:12.857004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75178 ] 00:07:19.655 [2024-07-23 04:03:12.977710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:19.655 [2024-07-23 04:03:12.990450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.914 [2024-07-23 04:03:13.068241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.914 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 
04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.915 04:03:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.289 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.289 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.290 ************************************ 00:07:21.290 END TEST accel_copy_crc32c_C2 00:07:21.290 ************************************ 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.290 00:07:21.290 real 0m1.462s 00:07:21.290 user 0m1.220s 00:07:21.290 sys 0m0.145s 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.290 04:03:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:21.290 04:03:14 accel -- common/autotest_common.sh@1142 -- # return 0 
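The START TEST / END TEST banners and the real/user/sys lines bracketing each case come from the common run_test helper, which times the wrapped command; the helper itself (in common/autotest_common.sh) is not reproduced in this log. A minimal stand-in that mimics only the behaviour visible here, not the real implementation:

  # Simplified stand-in for the run_test pattern seen in this log.
  run_test() {
      local name=$1; shift
      printf '%s\n' '************************************' "START TEST $name" '************************************'
      time "$@"                      # bash 'time' emits the real/user/sys lines
      local rc=$?
      printf '%s\n' '************************************' "END TEST $name" '************************************'
      return $rc
  }
  # Usage matching the next case in the log (accel_test is defined by accel.sh):
  # run_test accel_dualcast accel_test -t 1 -w dualcast -y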
00:07:21.290 04:03:14 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:21.290 04:03:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:21.290 04:03:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.290 04:03:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.290 ************************************ 00:07:21.290 START TEST accel_dualcast 00:07:21.290 ************************************ 00:07:21.290 04:03:14 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:21.290 04:03:14 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:21.290 [2024-07-23 04:03:14.371768] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:21.290 [2024-07-23 04:03:14.371858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75208 ] 00:07:21.290 [2024-07-23 04:03:14.491999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
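Most of the volume in this section is bash xtrace from accel.sh reading back the settings that accel_perf reports (workload, buffer size, queue depth, run time, module) as colon-separated key/value pairs — that is what the repeated IFS=: / read -r var val / case "$var" in lines are. A sketch of that loop follows; the key strings in the case arms are assumptions, and only the overall shape (colon-split pairs driving accel_opc and accel_module, then the closing [[ ... ]] checks) is taken from the trace:

  # Sketch of the parse loop behind the repeated xtrace lines, not a verbatim
  # copy of accel.sh. Key names in the case arms are assumed for illustration.
  accel_opc='' accel_module=''
  while IFS=: read -r var val; do
      case "$var" in
          *workload*) accel_opc=$val ;;      # e.g. copy, fill, dualcast, xor
          *module*)   accel_module=$val ;;   # e.g. software
          *) ;;
      esac
  done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y)
  # The checks that close each test in the log:
  [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]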
00:07:21.290 [2024-07-23 04:03:14.507939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.290 [2024-07-23 04:03:14.585109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.549 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.550 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.550 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.550 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.550 04:03:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.550 04:03:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.550 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.550 04:03:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.484 04:03:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.485 04:03:15 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:22.485 ************************************ 00:07:22.485 END TEST accel_dualcast 00:07:22.485 ************************************ 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:22.485 04:03:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.485 00:07:22.485 real 0m1.449s 00:07:22.485 user 0m1.236s 00:07:22.485 sys 0m0.118s 00:07:22.485 04:03:15 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.485 04:03:15 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:22.744 04:03:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.744 04:03:15 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:22.744 04:03:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:22.744 04:03:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.744 04:03:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.744 ************************************ 00:07:22.744 START TEST accel_compare 00:07:22.744 ************************************ 00:07:22.744 04:03:15 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:22.744 04:03:15 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:22.744 [2024-07-23 04:03:15.873528] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:22.744 [2024-07-23 04:03:15.873646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75243 ] 00:07:22.744 [2024-07-23 04:03:15.994621] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:22.744 [2024-07-23 04:03:16.013193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.003 [2024-07-23 04:03:16.104266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.003 04:03:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.381 04:03:17 accel.accel_compare -- 
accel/accel.sh@20 -- # val= 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:24.381 ************************************ 00:07:24.381 END TEST accel_compare 00:07:24.381 ************************************ 00:07:24.381 04:03:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.381 00:07:24.381 real 0m1.452s 00:07:24.381 user 0m1.233s 00:07:24.381 sys 0m0.124s 00:07:24.381 04:03:17 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.381 04:03:17 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:24.381 04:03:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.381 04:03:17 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:24.381 04:03:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:24.382 04:03:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.382 04:03:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.382 ************************************ 00:07:24.382 START TEST accel_xor 00:07:24.382 ************************************ 00:07:24.382 04:03:17 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:24.382 [2024-07-23 04:03:17.371970] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:24.382 [2024-07-23 04:03:17.372063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75277 ] 00:07:24.382 [2024-07-23 04:03:17.492035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
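The build_accel_config trace above (accel_json_cfg=(), local IFS=,, jq -r .) together with the '-c /dev/fd/62' argument shows that the harness assembles a JSON accel configuration in memory and hands it to accel_perf through a file descriptor rather than a temporary file. A hedged illustration of that technique — the exact JSON layout is an assumption based on SPDK's standard subsystem config format, and the config is left empty since the software module needs no entries:

  # Sketch of the '-c /dev/fd/NN' technique: feed an in-memory JSON config to
  # accel_perf via process substitution. The JSON layout is assumed, not taken
  # verbatim from accel.sh.
  accel_json_cfg=()                                  # would hold per-module JSON entries
  joined=$(IFS=,; printf '%s' "${accel_json_cfg[*]}")
  json="{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[$joined]}]}"
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c <(printf '%s\n' "$json" | jq -r .) -t 1 -w xor -y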
00:07:24.382 [2024-07-23 04:03:17.510699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.382 [2024-07-23 04:03:17.569463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.382 04:03:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.759 00:07:25.759 real 0m1.420s 00:07:25.759 user 0m1.221s 00:07:25.759 sys 0m0.102s 00:07:25.759 04:03:18 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.759 04:03:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:25.759 ************************************ 00:07:25.759 END TEST accel_xor 00:07:25.759 ************************************ 00:07:25.759 04:03:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.759 04:03:18 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:25.759 04:03:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:25.759 04:03:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.759 04:03:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.759 ************************************ 00:07:25.759 START TEST accel_xor 00:07:25.759 ************************************ 00:07:25.759 04:03:18 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:25.759 04:03:18 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:25.759 [2024-07-23 04:03:18.834766] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:25.760 [2024-07-23 04:03:18.834922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75312 ] 00:07:25.760 [2024-07-23 04:03:18.951712] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:25.760 [2024-07-23 04:03:18.970956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.760 [2024-07-23 04:03:19.064239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:26.018 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.019 04:03:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.980 04:03:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:26.981 04:03:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.981 00:07:26.981 real 0m1.462s 00:07:26.981 user 0m1.246s 00:07:26.981 sys 0m0.120s 00:07:26.981 04:03:20 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.981 ************************************ 00:07:26.981 END TEST accel_xor 00:07:26.981 ************************************ 00:07:26.981 04:03:20 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:26.981 04:03:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.981 04:03:20 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:26.981 04:03:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:26.981 04:03:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.981 04:03:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.239 ************************************ 00:07:27.239 START TEST accel_dif_verify 00:07:27.239 ************************************ 00:07:27.239 04:03:20 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:27.239 04:03:20 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:27.239 [2024-07-23 04:03:20.352009] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:27.239 [2024-07-23 04:03:20.352102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75346 ] 00:07:27.239 [2024-07-23 04:03:20.472106] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
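As with the xor cases, dif_verify is the same accel_perf binary with a different -w workload; the option dump that follows lists 4096-byte and 512-byte buffers plus an 8-byte value next to the usual software module and 1-second duration. A comparable standalone run, assuming the paths from this job and again omitting the harness-supplied -c /dev/fd/62 config, would be roughly:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify   # DIF verify workload on the software module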
00:07:27.239 [2024-07-23 04:03:20.492254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.239 [2024-07-23 04:03:20.581702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:27.498 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.499 04:03:20 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:27.499 04:03:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:21 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 ************************************ 00:07:28.876 END TEST accel_dif_verify 00:07:28.876 ************************************ 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:28.876 04:03:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.876 00:07:28.876 real 0m1.472s 00:07:28.876 user 0m1.264s 00:07:28.876 sys 0m0.113s 00:07:28.876 04:03:21 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.876 04:03:21 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:28.876 04:03:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.876 04:03:21 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:28.876 04:03:21 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:28.876 04:03:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.876 04:03:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.876 ************************************ 00:07:28.876 START TEST accel_dif_generate 00:07:28.876 ************************************ 00:07:28.876 04:03:21 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 
00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:28.876 04:03:21 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:28.876 [2024-07-23 04:03:21.885191] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:28.876 [2024-07-23 04:03:21.885286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75383 ] 00:07:28.876 [2024-07-23 04:03:22.006626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:28.876 [2024-07-23 04:03:22.027282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.876 [2024-07-23 04:03:22.123452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.876 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.877 04:03:22 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:28.877 04:03:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.252 ************************************ 00:07:30.252 END TEST accel_dif_generate 00:07:30.252 ************************************ 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:30.252 04:03:23 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.252 00:07:30.252 real 0m1.479s 
00:07:30.252 user 0m1.263s 00:07:30.252 sys 0m0.124s 00:07:30.252 04:03:23 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.252 04:03:23 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:30.252 04:03:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.252 04:03:23 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:30.252 04:03:23 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:30.252 04:03:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.252 04:03:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.252 ************************************ 00:07:30.252 START TEST accel_dif_generate_copy 00:07:30.252 ************************************ 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:30.252 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:30.252 [2024-07-23 04:03:23.404521] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:30.252 [2024-07-23 04:03:23.404615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75417 ] 00:07:30.252 [2024-07-23 04:03:23.524517] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
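Every case in this suite follows the pattern visible here: run_test names the test (accel_dif_generate_copy), accel_test receives the remaining arguments, and those are forwarded to the accel_perf example binary. Assuming the repo layout used by this job, the generate-copy workload on its own reduces to:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy   # DIF generate+copy workload, software module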
00:07:30.252 [2024-07-23 04:03:23.543556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.510 [2024-07-23 04:03:23.616206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.511 04:03:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.885 00:07:31.885 real 0m1.439s 00:07:31.885 user 0m1.230s 00:07:31.885 sys 0m0.114s 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.885 ************************************ 00:07:31.885 END TEST accel_dif_generate_copy 00:07:31.885 ************************************ 00:07:31.885 04:03:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:31.885 04:03:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.885 04:03:24 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:31.885 04:03:24 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.885 04:03:24 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:31.885 04:03:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.885 04:03:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.885 ************************************ 00:07:31.885 START TEST accel_comp 00:07:31.885 ************************************ 00:07:31.885 04:03:24 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@19 -- # 
read -r var val 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.885 04:03:24 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.886 04:03:24 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:31.886 04:03:24 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:31.886 [2024-07-23 04:03:24.879372] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:31.886 [2024-07-23 04:03:24.879485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75452 ] 00:07:31.886 [2024-07-23 04:03:25.002022] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:31.886 [2024-07-23 04:03:25.019122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.886 [2024-07-23 04:03:25.093011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 
-- # val=compress 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:31.886 04:03:25 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:31.886 04:03:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:33.289 04:03:26 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.289 00:07:33.289 real 0m1.448s 00:07:33.289 user 0m1.229s 00:07:33.289 sys 0m0.123s 00:07:33.289 04:03:26 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.289 04:03:26 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:33.289 ************************************ 00:07:33.289 END TEST accel_comp 00:07:33.289 ************************************ 00:07:33.289 04:03:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.289 04:03:26 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.289 04:03:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:33.289 04:03:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.289 04:03:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.289 ************************************ 00:07:33.289 START TEST accel_decomp 00:07:33.289 ************************************ 00:07:33.289 04:03:26 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.289 04:03:26 
accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:33.289 04:03:26 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:33.289 [2024-07-23 04:03:26.383567] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:33.289 [2024-07-23 04:03:26.383671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75481 ] 00:07:33.289 [2024-07-23 04:03:26.503507] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:33.289 [2024-07-23 04:03:26.514754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.289 [2024-07-23 04:03:26.590507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 
accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.548 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:33.549 04:03:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.483 04:03:27 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.483 04:03:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.484 00:07:34.484 real 0m1.435s 00:07:34.484 user 0m1.219s 00:07:34.484 sys 0m0.124s 00:07:34.484 04:03:27 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.484 04:03:27 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:34.484 ************************************ 00:07:34.484 END TEST accel_decomp 00:07:34.484 ************************************ 00:07:34.743 04:03:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.743 04:03:27 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.743 04:03:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:34.743 04:03:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.743 04:03:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.743 ************************************ 00:07:34.743 START TEST accel_decomp_full 00:07:34.743 ************************************ 00:07:34.743 04:03:27 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:34.743 04:03:27 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:34.743 [2024-07-23 04:03:27.875069] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:34.743 [2024-07-23 04:03:27.875209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75521 ] 00:07:34.743 [2024-07-23 04:03:27.996165] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:34.743 [2024-07-23 04:03:28.014617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.743 [2024-07-23 04:03:28.066290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:35.016 04:03:28 accel.accel_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.016 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.017 04:03:28 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:35.017 04:03:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.965 ************************************ 00:07:35.965 END TEST accel_decomp_full 00:07:35.965 ************************************ 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.965 04:03:29 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.965 00:07:35.965 real 0m1.411s 00:07:35.965 user 0m1.212s 00:07:35.965 sys 0m0.109s 00:07:35.965 04:03:29 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.965 04:03:29 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:35.965 04:03:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.965 04:03:29 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:35.965 04:03:29 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:35.965 04:03:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.965 04:03:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.224 ************************************ 00:07:36.224 START TEST accel_decomp_mcore 00:07:36.224 ************************************ 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:36.224 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:36.224 [2024-07-23 04:03:29.341719] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:36.224 [2024-07-23 04:03:29.341807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75550 ] 00:07:36.224 [2024-07-23 04:03:29.463094] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:36.224 [2024-07-23 04:03:29.478745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.224 [2024-07-23 04:03:29.533665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.224 [2024-07-23 04:03:29.533805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.224 [2024-07-23 04:03:29.533973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.224 [2024-07-23 04:03:29.534195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.483 04:03:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.422 04:03:30 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.422 00:07:37.422 real 0m1.436s 00:07:37.422 user 0m4.598s 00:07:37.422 sys 0m0.130s 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.422 04:03:30 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:37.422 ************************************ 00:07:37.422 END TEST accel_decomp_mcore 00:07:37.422 ************************************ 00:07:37.681 04:03:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.681 04:03:30 
accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.681 04:03:30 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:37.681 04:03:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.681 04:03:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.681 ************************************ 00:07:37.681 START TEST accel_decomp_full_mcore 00:07:37.681 ************************************ 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:37.681 04:03:30 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:37.681 [2024-07-23 04:03:30.828513] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:37.681 [2024-07-23 04:03:30.828599] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75593 ] 00:07:37.681 [2024-07-23 04:03:30.943544] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:37.681 [2024-07-23 04:03:30.958166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.940 [2024-07-23 04:03:31.053497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.940 [2024-07-23 04:03:31.053643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.940 [2024-07-23 04:03:31.053724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.940 [2024-07-23 04:03:31.054005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # 
IFS=: 00:07:37.940 04:03:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.317 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 ************************************ 00:07:39.318 END TEST accel_decomp_full_mcore 00:07:39.318 ************************************ 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.318 04:03:32 
accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.318 00:07:39.318 real 0m1.485s 00:07:39.318 user 0m4.691s 00:07:39.318 sys 0m0.133s 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.318 04:03:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:39.318 04:03:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.318 04:03:32 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.318 04:03:32 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:39.318 04:03:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.318 04:03:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.318 ************************************ 00:07:39.318 START TEST accel_decomp_mthread 00:07:39.318 ************************************ 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:39.318 [2024-07-23 04:03:32.366237] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:39.318 [2024-07-23 04:03:32.366315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75625 ] 00:07:39.318 [2024-07-23 04:03:32.480962] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:39.318 [2024-07-23 04:03:32.497436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.318 [2024-07-23 04:03:32.555839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.318 04:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.694 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.694 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.694 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.695 04:03:33 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.695 00:07:40.695 real 0m1.417s 00:07:40.695 user 0m1.217s 00:07:40.695 sys 0m0.108s 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.695 04:03:33 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:40.695 ************************************ 00:07:40.695 END TEST accel_decomp_mthread 00:07:40.695 ************************************ 00:07:40.695 04:03:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.695 04:03:33 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.695 04:03:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:40.695 04:03:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.695 04:03:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.695 ************************************ 00:07:40.695 START TEST accel_decomp_full_mthread 00:07:40.695 ************************************ 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.695 04:03:33 
accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:40.695 04:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:40.695 [2024-07-23 04:03:33.833217] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:40.695 [2024-07-23 04:03:33.833336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75660 ] 00:07:40.695 [2024-07-23 04:03:33.954300] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
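The accel_decomp_full_mthread variant being set up above differs only in passing -o 0, so each operation spans the whole decompressed payload (the trace records '111250 bytes' per op instead of the '4096 bytes' seen in the previous test). A sketch of the corresponding hand run, under the same path assumptions as before:

# same two-thread decompress run, but with -o 0 so one op covers the full test vector
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2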
00:07:40.695 [2024-07-23 04:03:33.971348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.695 [2024-07-23 04:03:34.028442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.970 04:03:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.917 00:07:41.917 real 0m1.434s 00:07:41.917 user 0m1.221s 00:07:41.917 sys 0m0.120s 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.917 ************************************ 00:07:41.917 END TEST accel_decomp_full_mthread 00:07:41.917 ************************************ 00:07:41.917 04:03:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:42.176 04:03:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.176 04:03:35 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:42.176 04:03:35 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:42.176 04:03:35 accel -- 
accel/accel.sh@137 -- # build_accel_config 00:07:42.176 04:03:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.176 04:03:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.176 04:03:35 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:42.176 04:03:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.176 04:03:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.176 04:03:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.176 04:03:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.176 04:03:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:42.176 04:03:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.176 04:03:35 accel -- accel/accel.sh@41 -- # jq -r . 00:07:42.176 ************************************ 00:07:42.176 START TEST accel_dif_functional_tests 00:07:42.176 ************************************ 00:07:42.176 04:03:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:42.177 [2024-07-23 04:03:35.352302] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:42.177 [2024-07-23 04:03:35.352393] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75695 ] 00:07:42.177 [2024-07-23 04:03:35.473721] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:42.177 [2024-07-23 04:03:35.492178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.436 [2024-07-23 04:03:35.555083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.436 [2024-07-23 04:03:35.555154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.436 [2024-07-23 04:03:35.555160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.436 [2024-07-23 04:03:35.618273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.436 00:07:42.436 00:07:42.436 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.436 http://cunit.sourceforge.net/ 00:07:42.436 00:07:42.436 00:07:42.436 Suite: accel_dif 00:07:42.436 Test: verify: DIF generated, GUARD check ...passed 00:07:42.436 Test: verify: DIF generated, APPTAG check ...passed 00:07:42.436 Test: verify: DIF generated, REFTAG check ...passed 00:07:42.436 Test: verify: DIF not generated, GUARD check ...passed 00:07:42.436 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 04:03:35.652415] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:42.436 [2024-07-23 04:03:35.652572] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:42.436 passed 00:07:42.436 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 04:03:35.652723] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:42.436 passed 00:07:42.436 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:42.436 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 04:03:35.652852] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:42.436 passed 00:07:42.436 Test: verify: APPTAG incorrect, no APPTAG 
check ...passed 00:07:42.436 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:42.436 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:42.436 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 04:03:35.653249] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:42.436 passed 00:07:42.436 Test: verify copy: DIF generated, GUARD check ...passed 00:07:42.436 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:42.436 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:42.436 Test: verify copy: DIF not generated, GUARD check ...[2024-07-23 04:03:35.653794] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:42.436 passed 00:07:42.436 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 04:03:35.653949] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:42.436 passed 00:07:42.436 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:42.436 Test: generate copy: DIF generated, GUARD check ...[2024-07-23 04:03:35.654006] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:42.436 passed 00:07:42.436 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:42.436 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:42.436 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:42.436 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:42.436 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:42.436 Test: generate copy: iovecs-len validate ...passed 00:07:42.436 Test: generate copy: buffer alignment validate ...[2024-07-23 04:03:35.654631] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
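A note on reading the accel_dif_functional_tests output above: the dif.c *ERROR* lines are produced on purpose, since the negative-path cases feed deliberately mismatched Guard, App Tag and Ref Tag values and expect the comparison to fail, so each of them is immediately followed by "passed". A real regression would instead show a non-zero Failed column in the CUnit run summary. A quick way to check that when triaging such a log by hand (the saved log file name here is hypothetical):

# pull only the CUnit tallies out of a saved test log
grep -E '(suites|tests|asserts) +[0-9]' autotest.log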
00:07:42.436 passed 00:07:42.436 00:07:42.436 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.436 suites 1 1 n/a 0 0 00:07:42.436 tests 26 26 26 0 0 00:07:42.436 asserts 115 115 115 0 n/a 00:07:42.436 00:07:42.436 Elapsed time = 0.006 seconds 00:07:42.695 00:07:42.695 real 0m0.543s 00:07:42.695 user 0m0.731s 00:07:42.695 sys 0m0.156s 00:07:42.695 04:03:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.695 ************************************ 00:07:42.695 END TEST accel_dif_functional_tests 00:07:42.695 ************************************ 00:07:42.695 04:03:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:42.695 04:03:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.695 00:07:42.695 real 0m33.475s 00:07:42.695 user 0m35.132s 00:07:42.695 sys 0m4.047s 00:07:42.695 ************************************ 00:07:42.695 END TEST accel 00:07:42.695 ************************************ 00:07:42.695 04:03:35 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.695 04:03:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.695 04:03:35 -- common/autotest_common.sh@1142 -- # return 0 00:07:42.695 04:03:35 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:42.695 04:03:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.695 04:03:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.695 04:03:35 -- common/autotest_common.sh@10 -- # set +x 00:07:42.695 ************************************ 00:07:42.695 START TEST accel_rpc 00:07:42.695 ************************************ 00:07:42.695 04:03:35 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:42.695 * Looking for test storage... 00:07:42.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:42.695 04:03:36 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:42.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.695 04:03:36 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=75765 00:07:42.695 04:03:36 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 75765 00:07:42.695 04:03:36 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:42.695 04:03:36 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 75765 ']' 00:07:42.695 04:03:36 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.695 04:03:36 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.695 04:03:36 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.695 04:03:36 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.695 04:03:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.954 [2024-07-23 04:03:36.069629] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
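The accel_rpc suite starting here boots spdk_tgt with --wait-for-rpc and then drives accel opcode assignment over the RPC socket. A minimal sketch of the same flow by hand, using only RPCs that appear in the trace (repo layout and the /var/tmp/spdk.sock socket assumed from the log):

# start the target paused, assign the copy opcode to the software module,
# complete framework init, then confirm the assignment
./build/bin/spdk_tgt --wait-for-rpc &
# wait for /var/tmp/spdk.sock to appear (waitforlisten does this in the script)
./scripts/rpc.py accel_assign_opc -o copy -m software
./scripts/rpc.py framework_start_init
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software
kill %1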
00:07:42.954 [2024-07-23 04:03:36.069721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75765 ] 00:07:42.954 [2024-07-23 04:03:36.191510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:42.954 [2024-07-23 04:03:36.210412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.954 [2024-07-23 04:03:36.274538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.890 04:03:37 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.890 04:03:37 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:43.890 04:03:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:43.890 04:03:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:43.890 04:03:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:43.890 04:03:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:43.890 04:03:37 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:43.890 04:03:37 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.890 04:03:37 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.890 04:03:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.890 ************************************ 00:07:43.890 START TEST accel_assign_opcode 00:07:43.890 ************************************ 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.890 [2024-07-23 04:03:37.063078] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.890 [2024-07-23 04:03:37.075080] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.890 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:43.890 [2024-07-23 04:03:37.134601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.149 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.149 04:03:37 accel_rpc.accel_assign_opcode 
-- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:44.149 04:03:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:44.149 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.149 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:44.149 04:03:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:44.149 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.149 software 00:07:44.149 ************************************ 00:07:44.149 END TEST accel_assign_opcode 00:07:44.149 ************************************ 00:07:44.149 00:07:44.149 real 0m0.279s 00:07:44.149 user 0m0.053s 00:07:44.149 sys 0m0.014s 00:07:44.149 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.149 04:03:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:44.149 04:03:37 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 75765 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 75765 ']' 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 75765 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75765 00:07:44.149 killing process with pid 75765 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75765' 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@967 -- # kill 75765 00:07:44.149 04:03:37 accel_rpc -- common/autotest_common.sh@972 -- # wait 75765 00:07:44.717 00:07:44.717 real 0m1.856s 00:07:44.717 user 0m1.985s 00:07:44.717 sys 0m0.430s 00:07:44.717 04:03:37 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.717 04:03:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.717 ************************************ 00:07:44.717 END TEST accel_rpc 00:07:44.717 ************************************ 00:07:44.717 04:03:37 -- common/autotest_common.sh@1142 -- # return 0 00:07:44.717 04:03:37 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:44.717 04:03:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.717 04:03:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.717 04:03:37 -- common/autotest_common.sh@10 -- # set +x 00:07:44.717 ************************************ 00:07:44.717 START TEST app_cmdline 00:07:44.717 ************************************ 00:07:44.717 04:03:37 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:44.717 * Looking for test storage... 00:07:44.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:44.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
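The app_cmdline test beginning here checks RPC allow-listing: the target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so spdk_get_version returns the version object seen below while env_dpdk_get_mem_stats is rejected with a -32601 "Method not found" JSON-RPC error. A sketch of those two calls, assuming the default RPC socket:

./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
# once the socket is up:
./scripts/rpc.py spdk_get_version          # on the allow-list, succeeds
./scripts/rpc.py env_dpdk_get_mem_stats    # not allowed, fails with 'Method not found'
kill %1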
00:07:44.717 04:03:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:44.717 04:03:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=75853 00:07:44.717 04:03:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:44.717 04:03:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 75853 00:07:44.717 04:03:37 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 75853 ']' 00:07:44.717 04:03:37 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.717 04:03:37 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.717 04:03:37 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.717 04:03:37 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.717 04:03:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.717 [2024-07-23 04:03:37.974883] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:44.717 [2024-07-23 04:03:37.974981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75853 ] 00:07:44.975 [2024-07-23 04:03:38.093889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:44.976 [2024-07-23 04:03:38.112277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.976 [2024-07-23 04:03:38.170131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.976 [2024-07-23 04:03:38.225613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.911 04:03:38 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.911 04:03:38 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:45.911 04:03:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:45.911 { 00:07:45.911 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:07:45.911 "fields": { 00:07:45.911 "major": 24, 00:07:45.911 "minor": 9, 00:07:45.911 "patch": 0, 00:07:45.911 "suffix": "-pre", 00:07:45.911 "commit": "f7b31b2b9" 00:07:45.911 } 00:07:45.911 } 00:07:45.911 04:03:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:45.911 04:03:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:45.911 04:03:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:45.911 04:03:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:45.911 04:03:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:45.911 04:03:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.911 04:03:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.911 04:03:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:45.911 04:03:39 app_cmdline -- app/cmdline.sh@28 -- # 
[[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:45.911 04:03:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:45.911 04:03:39 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.170 request: 00:07:46.170 { 00:07:46.170 "method": "env_dpdk_get_mem_stats", 00:07:46.170 "req_id": 1 00:07:46.170 } 00:07:46.170 Got JSON-RPC error response 00:07:46.170 response: 00:07:46.170 { 00:07:46.170 "code": -32601, 00:07:46.170 "message": "Method not found" 00:07:46.170 } 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.170 04:03:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 75853 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 75853 ']' 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 75853 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75853 00:07:46.170 killing process with pid 75853 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75853' 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@967 -- # kill 75853 00:07:46.170 04:03:39 app_cmdline -- common/autotest_common.sh@972 -- # wait 75853 00:07:46.737 ************************************ 00:07:46.737 END TEST app_cmdline 00:07:46.737 ************************************ 00:07:46.737 00:07:46.737 real 0m2.001s 00:07:46.737 user 0m2.470s 00:07:46.737 sys 0m0.475s 00:07:46.737 04:03:39 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.737 04:03:39 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.737 04:03:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:46.737 04:03:39 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:46.737 04:03:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.737 04:03:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.737 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:46.737 ************************************ 00:07:46.737 START TEST version 00:07:46.737 ************************************ 00:07:46.737 04:03:39 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:46.737 * Looking for test storage... 00:07:46.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:46.737 04:03:39 version -- app/version.sh@17 -- # get_header_version major 00:07:46.737 04:03:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.737 04:03:39 version -- app/version.sh@14 -- # cut -f2 00:07:46.737 04:03:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.737 04:03:39 version -- app/version.sh@17 -- # major=24 00:07:46.737 04:03:39 version -- app/version.sh@18 -- # get_header_version minor 00:07:46.737 04:03:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.737 04:03:39 version -- app/version.sh@14 -- # cut -f2 00:07:46.737 04:03:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.737 04:03:39 version -- app/version.sh@18 -- # minor=9 00:07:46.737 04:03:39 version -- app/version.sh@19 -- # get_header_version patch 00:07:46.737 04:03:39 version -- app/version.sh@14 -- # cut -f2 00:07:46.737 04:03:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.737 04:03:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.737 04:03:39 version -- app/version.sh@19 -- # patch=0 00:07:46.737 04:03:39 version -- app/version.sh@20 -- # get_header_version suffix 00:07:46.737 04:03:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.737 04:03:39 version -- app/version.sh@14 -- # cut -f2 00:07:46.737 04:03:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.737 04:03:39 version -- app/version.sh@20 -- # suffix=-pre 00:07:46.737 04:03:39 version -- app/version.sh@22 -- # version=24.9 00:07:46.738 04:03:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:46.738 04:03:39 version -- app/version.sh@28 -- # version=24.9rc0 00:07:46.738 04:03:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:46.738 04:03:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:46.738 04:03:40 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:46.738 04:03:40 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:46.738 00:07:46.738 real 0m0.152s 00:07:46.738 user 0m0.087s 00:07:46.738 sys 0m0.095s 00:07:46.738 04:03:40 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.738 04:03:40 version -- common/autotest_common.sh@10 -- # set 
+x 00:07:46.738 ************************************ 00:07:46.738 END TEST version 00:07:46.738 ************************************ 00:07:46.997 04:03:40 -- common/autotest_common.sh@1142 -- # return 0 00:07:46.997 04:03:40 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:46.997 04:03:40 -- spdk/autotest.sh@198 -- # uname -s 00:07:46.997 04:03:40 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:46.997 04:03:40 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:46.997 04:03:40 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:07:46.997 04:03:40 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:07:46.997 04:03:40 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:46.997 04:03:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.997 04:03:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.997 04:03:40 -- common/autotest_common.sh@10 -- # set +x 00:07:46.997 ************************************ 00:07:46.997 START TEST spdk_dd 00:07:46.997 ************************************ 00:07:46.997 04:03:40 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:46.997 * Looking for test storage... 00:07:46.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:46.997 04:03:40 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.997 04:03:40 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.997 04:03:40 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.997 04:03:40 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.997 04:03:40 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.997 04:03:40 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.997 04:03:40 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.997 04:03:40 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:46.997 04:03:40 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.997 04:03:40 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:47.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:47.256 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:47.256 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:47.256 04:03:40 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:47.256 04:03:40 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@230 -- # local class 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@232 -- # local progif 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@233 -- # class=01 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:47.256 04:03:40 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@15 -- # local i 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@24 -- # return 0 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@15 -- # local i 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:07:47.516 04:03:40 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@24 -- # return 0 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:07:47.516 04:03:40 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:47.516 04:03:40 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.516 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd 
-- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == 
liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:47.517 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ 
librte_mempool.so.24 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:47.518 * spdk_dd linked to liburing 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:47.518 
04:03:40 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:47.518 
04:03:40 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:47.518 04:03:40 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:47.518 04:03:40 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:47.518 04:03:40 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:47.518 04:03:40 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:47.518 04:03:40 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:47.518 04:03:40 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.518 04:03:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:47.518 ************************************ 00:07:47.518 START TEST spdk_dd_basic_rw 00:07:47.518 ************************************ 00:07:47.518 04:03:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:47.518 * Looking for test storage... 
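[editor's note] The check_liburing trace above reduces to one question: does the spdk_dd binary carry a liburing.so.* entry among its NEEDED shared objects? A minimal standalone sketch of that idea follows; it is not the exact dd/common.sh code, and the binary path is a placeholder.
#!/usr/bin/env bash
# Sketch only: report whether a binary is dynamically linked against liburing,
# which is what the check_liburing trace above establishes for spdk_dd.
bin=${1:-./build/bin/spdk_dd}          # placeholder path, adjust to your tree
liburing_in_use=0
while read -r _ lib _; do
    # objdump -p emits one "NEEDED <soname>" line per runtime dependency
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(objdump -p "$bin" | grep NEEDED)
printf 'liburing_in_use=%d\n' "$liburing_in_use"
# The real dd/common.sh additionally sources test/common/build_config.sh here to
# consult CONFIG_URING before exporting liburing_in_use.
Because liburing.so.2 matched in the scan above, liburing_in_use is exported as 1, the guard at dd/dd.sh@15 (liburing_in_use == 0 && SPDK_TEST_URING == 1) does not trip, and the run proceeds into spdk_dd_basic_rw.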
00:07:47.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:47.518 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.518 04:03:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.518 04:03:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.518 04:03:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.518 04:03:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:47.519 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:47.780 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:47.780 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.781 ************************************ 00:07:47.781 START TEST dd_bs_lt_native_bs 00:07:47.781 ************************************ 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.781 04:03:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:47.781 { 00:07:47.781 "subsystems": [ 00:07:47.781 { 00:07:47.781 "subsystem": "bdev", 00:07:47.781 "config": [ 00:07:47.781 { 00:07:47.781 "params": { 00:07:47.781 "trtype": "pcie", 00:07:47.781 "traddr": "0000:00:10.0", 00:07:47.781 "name": "Nvme0" 00:07:47.781 }, 00:07:47.781 "method": "bdev_nvme_attach_controller" 00:07:47.781 }, 00:07:47.781 { 00:07:47.781 "method": "bdev_wait_for_examine" 00:07:47.781 } 00:07:47.781 ] 00:07:47.781 } 00:07:47.781 ] 00:07:47.781 } 00:07:47.781 [2024-07-23 04:03:41.041416] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:47.781 [2024-07-23 04:03:41.041498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76175 ] 00:07:48.040 [2024-07-23 04:03:41.163464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
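[editor's note] The two long identify dumps above are the same captured spdk_nvme_identify output being matched twice: once to find the current LBA format (#04 here) and once to read that format's data size (4096), which becomes native_bs. A rough standalone equivalent, assuming spdk_nvme_identify is on PATH; the variable names are illustrative, not the dd/common.sh ones.
# Sketch only: derive a namespace's native block size from spdk_nvme_identify output.
pci=0000:00:10.0
id=$(spdk_nvme_identify -r "trtype:pcie traddr:$pci")
re_current='Current LBA Format: *LBA Format #([0-9]+)'
if [[ $id =~ $re_current ]]; then
    lbaf=${BASH_REMATCH[1]}                               # "04" in the run above
    re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}  # 4096 here
fi
echo "native_bs=${native_bs:-unknown}"
With native_bs known to be 4096, dd_bs_lt_native_bs runs spdk_dd with --bs=2048 under the NOT helper: the copy is expected to fail, and the "--bs value cannot be less than input (1) neither output (4096) native block size" error reported just below is exactly the outcome the test is asserting.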
00:07:48.040 [2024-07-23 04:03:41.184087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.040 [2024-07-23 04:03:41.281017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.040 [2024-07-23 04:03:41.340663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.298 [2024-07-23 04:03:41.447093] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:48.298 [2024-07-23 04:03:41.447162] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.299 [2024-07-23 04:03:41.569889] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:48.557 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:07:48.557 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:48.557 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:07:48.557 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:07:48.557 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:07:48.557 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:48.557 00:07:48.557 real 0m0.674s 00:07:48.557 user 0m0.439s 00:07:48.557 sys 0m0.185s 00:07:48.557 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.557 ************************************ 00:07:48.557 END TEST dd_bs_lt_native_bs 00:07:48.557 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:48.557 ************************************ 00:07:48.557 04:03:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:48.557 04:03:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.558 ************************************ 00:07:48.558 START TEST dd_rw 00:07:48.558 ************************************ 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # 
bss+=($((native_bs << bs))) 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:48.558 04:03:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.125 04:03:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:49.125 04:03:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:49.125 04:03:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.125 04:03:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.125 [2024-07-23 04:03:42.451811] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:49.125 [2024-07-23 04:03:42.451918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76211 ] 00:07:49.125 { 00:07:49.125 "subsystems": [ 00:07:49.125 { 00:07:49.125 "subsystem": "bdev", 00:07:49.125 "config": [ 00:07:49.125 { 00:07:49.125 "params": { 00:07:49.125 "trtype": "pcie", 00:07:49.125 "traddr": "0000:00:10.0", 00:07:49.125 "name": "Nvme0" 00:07:49.125 }, 00:07:49.125 "method": "bdev_nvme_attach_controller" 00:07:49.125 }, 00:07:49.125 { 00:07:49.125 "method": "bdev_wait_for_examine" 00:07:49.125 } 00:07:49.125 ] 00:07:49.125 } 00:07:49.125 ] 00:07:49.125 } 00:07:49.384 [2024-07-23 04:03:42.575293] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:49.384 [2024-07-23 04:03:42.595555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.384 [2024-07-23 04:03:42.691697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.643 [2024-07-23 04:03:42.753885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.902  Copying: 60/60 [kB] (average 29 MBps) 00:07:49.902 00:07:49.902 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:49.902 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:49.902 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.902 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.902 [2024-07-23 04:03:43.130620] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
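[editor's note] Each spdk_dd invocation in this suite receives its bdev configuration through --json on a file descriptor (the /dev/fd/62 and /dev/fd/61 arguments above); gen_conf simply emits the subsystems block printed in the trace. A hedged sketch of the same mechanism, with the spdk_dd path as a placeholder and the JSON copied from the trace:
# Sketch only: hand spdk_dd its bdev config on a file descriptor instead of a temp file.
conf='{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
   "method":"bdev_nvme_attach_controller"},
  {"method":"bdev_wait_for_examine"}]}]}'
spdk_dd=/path/to/build/bin/spdk_dd             # placeholder path
"$spdk_dd" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")
Process substitution gives spdk_dd a /dev/fd/NN path, which is why the trace shows no temporary config file being written.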
00:07:49.902 [2024-07-23 04:03:43.130715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76225 ] 00:07:49.902 { 00:07:49.902 "subsystems": [ 00:07:49.902 { 00:07:49.902 "subsystem": "bdev", 00:07:49.902 "config": [ 00:07:49.902 { 00:07:49.902 "params": { 00:07:49.902 "trtype": "pcie", 00:07:49.902 "traddr": "0000:00:10.0", 00:07:49.902 "name": "Nvme0" 00:07:49.902 }, 00:07:49.902 "method": "bdev_nvme_attach_controller" 00:07:49.902 }, 00:07:49.902 { 00:07:49.902 "method": "bdev_wait_for_examine" 00:07:49.902 } 00:07:49.902 ] 00:07:49.902 } 00:07:49.902 ] 00:07:49.902 } 00:07:50.162 [2024-07-23 04:03:43.252651] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:50.162 [2024-07-23 04:03:43.271281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.162 [2024-07-23 04:03:43.365338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.162 [2024-07-23 04:03:43.425911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.424  Copying: 60/60 [kB] (average 19 MBps) 00:07:50.424 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:50.424 04:03:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.682 { 00:07:50.682 "subsystems": [ 00:07:50.682 { 00:07:50.682 "subsystem": "bdev", 00:07:50.682 "config": [ 00:07:50.682 { 00:07:50.682 "params": { 00:07:50.682 "trtype": "pcie", 00:07:50.682 "traddr": "0000:00:10.0", 00:07:50.682 "name": "Nvme0" 00:07:50.682 }, 00:07:50.682 "method": "bdev_nvme_attach_controller" 00:07:50.682 }, 00:07:50.682 { 00:07:50.682 "method": "bdev_wait_for_examine" 00:07:50.682 } 00:07:50.682 ] 00:07:50.682 } 00:07:50.682 ] 00:07:50.682 } 00:07:50.682 [2024-07-23 04:03:43.812615] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:07:50.682 [2024-07-23 04:03:43.812736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76240 ] 00:07:50.682 [2024-07-23 04:03:43.934688] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:50.682 [2024-07-23 04:03:43.950489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.941 [2024-07-23 04:03:44.058038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.941 [2024-07-23 04:03:44.121130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.199  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:51.199 00:07:51.199 04:03:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:51.199 04:03:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:51.199 04:03:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:51.199 04:03:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:51.199 04:03:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:51.199 04:03:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:51.199 04:03:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.135 04:03:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:52.135 04:03:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:52.135 04:03:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:52.135 04:03:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.135 [2024-07-23 04:03:45.259261] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:52.135 [2024-07-23 04:03:45.259561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76265 ] 00:07:52.135 { 00:07:52.135 "subsystems": [ 00:07:52.135 { 00:07:52.135 "subsystem": "bdev", 00:07:52.135 "config": [ 00:07:52.135 { 00:07:52.135 "params": { 00:07:52.135 "trtype": "pcie", 00:07:52.135 "traddr": "0000:00:10.0", 00:07:52.135 "name": "Nvme0" 00:07:52.135 }, 00:07:52.135 "method": "bdev_nvme_attach_controller" 00:07:52.135 }, 00:07:52.135 { 00:07:52.135 "method": "bdev_wait_for_examine" 00:07:52.135 } 00:07:52.135 ] 00:07:52.135 } 00:07:52.135 ] 00:07:52.135 } 00:07:52.135 [2024-07-23 04:03:45.382306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
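[editor's note] At this point the 4096-byte, queue-depth-1 pass has completed above (write, read back, diff, then clear_nvme zeroing the first MiB), and the same cycle is starting again at queue depth 64. A condensed sketch of one such pass, with SPDK_DD and CONF as placeholders and a crude stand-in for gen_bytes; the spdk_dd flags mirror the trace.
# Sketch only: one basic_rw pass -- write a pattern, read it back, verify, then wipe.
bs=4096 qd=1 count=15
size=$((bs * count))                                   # 61440 bytes, as in the trace
head -c "$size" /dev/urandom > dd.dump0                # rough stand-in for gen_bytes
"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF"
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$CONF"
diff -q dd.dump0 dd.dump1                              # round-trip must be bit-identical
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$CONF"   # clear_nvme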
00:07:52.135 [2024-07-23 04:03:45.402800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.394 [2024-07-23 04:03:45.500674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.394 [2024-07-23 04:03:45.561593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.653  Copying: 60/60 [kB] (average 58 MBps) 00:07:52.653 00:07:52.653 04:03:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:52.653 04:03:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:52.653 04:03:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:52.653 04:03:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.653 [2024-07-23 04:03:45.937639] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:52.653 [2024-07-23 04:03:45.937748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76278 ] 00:07:52.653 { 00:07:52.653 "subsystems": [ 00:07:52.653 { 00:07:52.653 "subsystem": "bdev", 00:07:52.653 "config": [ 00:07:52.653 { 00:07:52.653 "params": { 00:07:52.653 "trtype": "pcie", 00:07:52.653 "traddr": "0000:00:10.0", 00:07:52.653 "name": "Nvme0" 00:07:52.653 }, 00:07:52.653 "method": "bdev_nvme_attach_controller" 00:07:52.653 }, 00:07:52.653 { 00:07:52.653 "method": "bdev_wait_for_examine" 00:07:52.653 } 00:07:52.653 ] 00:07:52.653 } 00:07:52.653 ] 00:07:52.653 } 00:07:52.911 [2024-07-23 04:03:46.060385] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:52.911 [2024-07-23 04:03:46.081164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.911 [2024-07-23 04:03:46.172358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.911 [2024-07-23 04:03:46.228926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.428  Copying: 60/60 [kB] (average 29 MBps) 00:07:53.428 00:07:53.428 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.428 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:53.428 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:53.428 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:53.428 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:53.428 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:53.428 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:53.429 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:53.429 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:53.429 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:53.429 04:03:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:53.429 [2024-07-23 04:03:46.619444] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:53.429 [2024-07-23 04:03:46.619553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76299 ] 00:07:53.429 { 00:07:53.429 "subsystems": [ 00:07:53.429 { 00:07:53.429 "subsystem": "bdev", 00:07:53.429 "config": [ 00:07:53.429 { 00:07:53.429 "params": { 00:07:53.429 "trtype": "pcie", 00:07:53.429 "traddr": "0000:00:10.0", 00:07:53.429 "name": "Nvme0" 00:07:53.429 }, 00:07:53.429 "method": "bdev_nvme_attach_controller" 00:07:53.429 }, 00:07:53.429 { 00:07:53.429 "method": "bdev_wait_for_examine" 00:07:53.429 } 00:07:53.429 ] 00:07:53.429 } 00:07:53.429 ] 00:07:53.429 } 00:07:53.429 [2024-07-23 04:03:46.741551] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:53.429 [2024-07-23 04:03:46.758446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.687 [2024-07-23 04:03:46.844841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.687 [2024-07-23 04:03:46.900527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.946  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:53.946 00:07:53.946 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:53.946 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:53.946 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:53.946 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:53.946 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:53.946 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:53.946 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:53.946 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.512 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:54.512 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:54.512 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:54.512 04:03:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.770 [2024-07-23 04:03:47.880747] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:54.770 [2024-07-23 04:03:47.880878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76318 ] 00:07:54.770 { 00:07:54.770 "subsystems": [ 00:07:54.770 { 00:07:54.770 "subsystem": "bdev", 00:07:54.770 "config": [ 00:07:54.770 { 00:07:54.770 "params": { 00:07:54.770 "trtype": "pcie", 00:07:54.770 "traddr": "0000:00:10.0", 00:07:54.770 "name": "Nvme0" 00:07:54.770 }, 00:07:54.770 "method": "bdev_nvme_attach_controller" 00:07:54.770 }, 00:07:54.770 { 00:07:54.770 "method": "bdev_wait_for_examine" 00:07:54.770 } 00:07:54.770 ] 00:07:54.770 } 00:07:54.770 ] 00:07:54.770 } 00:07:54.770 [2024-07-23 04:03:48.006075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:54.770 [2024-07-23 04:03:48.024292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.770 [2024-07-23 04:03:48.108737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.029 [2024-07-23 04:03:48.163262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.287  Copying: 56/56 [kB] (average 54 MBps) 00:07:55.287 00:07:55.287 04:03:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:55.287 04:03:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:55.287 04:03:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:55.287 04:03:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:55.287 [2024-07-23 04:03:48.510185] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:55.287 [2024-07-23 04:03:48.510270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76332 ] 00:07:55.287 { 00:07:55.287 "subsystems": [ 00:07:55.287 { 00:07:55.287 "subsystem": "bdev", 00:07:55.287 "config": [ 00:07:55.287 { 00:07:55.287 "params": { 00:07:55.287 "trtype": "pcie", 00:07:55.287 "traddr": "0000:00:10.0", 00:07:55.287 "name": "Nvme0" 00:07:55.287 }, 00:07:55.287 "method": "bdev_nvme_attach_controller" 00:07:55.287 }, 00:07:55.287 { 00:07:55.287 "method": "bdev_wait_for_examine" 00:07:55.287 } 00:07:55.287 ] 00:07:55.287 } 00:07:55.287 ] 00:07:55.287 } 00:07:55.545 [2024-07-23 04:03:48.632402] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:55.545 [2024-07-23 04:03:48.651478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.545 [2024-07-23 04:03:48.741083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.545 [2024-07-23 04:03:48.798911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.805  Copying: 56/56 [kB] (average 27 MBps) 00:07:55.805 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:55.805 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:56.064 [2024-07-23 04:03:49.168514] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:56.064 [2024-07-23 04:03:49.168615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76353 ] 00:07:56.064 { 00:07:56.064 "subsystems": [ 00:07:56.064 { 00:07:56.064 "subsystem": "bdev", 00:07:56.064 "config": [ 00:07:56.064 { 00:07:56.064 "params": { 00:07:56.064 "trtype": "pcie", 00:07:56.064 "traddr": "0000:00:10.0", 00:07:56.064 "name": "Nvme0" 00:07:56.064 }, 00:07:56.064 "method": "bdev_nvme_attach_controller" 00:07:56.064 }, 00:07:56.064 { 00:07:56.064 "method": "bdev_wait_for_examine" 00:07:56.064 } 00:07:56.064 ] 00:07:56.064 } 00:07:56.064 ] 00:07:56.064 } 00:07:56.064 [2024-07-23 04:03:49.291681] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:56.064 [2024-07-23 04:03:49.308128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.064 [2024-07-23 04:03:49.387803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.322 [2024-07-23 04:03:49.441651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:56.580  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:56.580 00:07:56.580 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:56.580 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:56.580 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:56.580 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:56.580 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:56.580 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:56.581 04:03:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.147 04:03:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:57.147 04:03:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:57.147 04:03:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:57.147 04:03:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.147 [2024-07-23 04:03:50.436936] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:57.147 [2024-07-23 04:03:50.437067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76372 ] 00:07:57.147 { 00:07:57.147 "subsystems": [ 00:07:57.147 { 00:07:57.147 "subsystem": "bdev", 00:07:57.147 "config": [ 00:07:57.147 { 00:07:57.147 "params": { 00:07:57.147 "trtype": "pcie", 00:07:57.147 "traddr": "0000:00:10.0", 00:07:57.147 "name": "Nvme0" 00:07:57.147 }, 00:07:57.147 "method": "bdev_nvme_attach_controller" 00:07:57.147 }, 00:07:57.147 { 00:07:57.147 "method": "bdev_wait_for_examine" 00:07:57.147 } 00:07:57.147 ] 00:07:57.147 } 00:07:57.147 ] 00:07:57.147 } 00:07:57.405 [2024-07-23 04:03:50.558501] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:57.405 [2024-07-23 04:03:50.578124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.405 [2024-07-23 04:03:50.672596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.405 [2024-07-23 04:03:50.737692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.921  Copying: 56/56 [kB] (average 54 MBps) 00:07:57.921 00:07:57.921 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:57.921 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:57.921 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:57.921 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.921 [2024-07-23 04:03:51.104452] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:57.921 [2024-07-23 04:03:51.104549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76385 ] 00:07:57.921 { 00:07:57.921 "subsystems": [ 00:07:57.921 { 00:07:57.921 "subsystem": "bdev", 00:07:57.921 "config": [ 00:07:57.921 { 00:07:57.921 "params": { 00:07:57.921 "trtype": "pcie", 00:07:57.921 "traddr": "0000:00:10.0", 00:07:57.921 "name": "Nvme0" 00:07:57.921 }, 00:07:57.921 "method": "bdev_nvme_attach_controller" 00:07:57.921 }, 00:07:57.921 { 00:07:57.921 "method": "bdev_wait_for_examine" 00:07:57.921 } 00:07:57.921 ] 00:07:57.921 } 00:07:57.921 ] 00:07:57.921 } 00:07:57.921 [2024-07-23 04:03:51.228096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:57.921 [2024-07-23 04:03:51.244685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.179 [2024-07-23 04:03:51.342877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.179 [2024-07-23 04:03:51.402727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.438  Copying: 56/56 [kB] (average 54 MBps) 00:07:58.438 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:58.438 04:03:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:58.438 [2024-07-23 04:03:51.774641] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:58.438 [2024-07-23 04:03:51.774728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76401 ] 00:07:58.438 { 00:07:58.438 "subsystems": [ 00:07:58.438 { 00:07:58.438 "subsystem": "bdev", 00:07:58.438 "config": [ 00:07:58.438 { 00:07:58.438 "params": { 00:07:58.438 "trtype": "pcie", 00:07:58.438 "traddr": "0000:00:10.0", 00:07:58.438 "name": "Nvme0" 00:07:58.438 }, 00:07:58.438 "method": "bdev_nvme_attach_controller" 00:07:58.438 }, 00:07:58.438 { 00:07:58.438 "method": "bdev_wait_for_examine" 00:07:58.438 } 00:07:58.438 ] 00:07:58.438 } 00:07:58.438 ] 00:07:58.438 } 00:07:58.696 [2024-07-23 04:03:51.897999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
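The dd_rw blocks above all follow the same cycle, one per block-size/queue-depth pair: write dd.dump0 into the Nvme0n1 bdev, read the same number of bytes back into dd.dump1, diff the two dump files, then overwrite the bdev with zeroes before the next pass. A rough reconstruction of a single pass, using the flags visible in the log (here bs=8192, qd=64, count=7, i.e. 57344 bytes) and the gen_conf sketch from earlier:

  # Write the generated dump file into the bdev at the given block size and queue depth.
  spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json <(gen_conf)
  # Read the same amount back out of the bdev into a second dump file.
  spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json <(gen_conf)
  # Verify the round trip byte-for-byte.
  diff -q test/dd/dd.dump0 test/dd/dd.dump1
  # clear_nvme: wipe the region with zeroes so the next iteration starts clean.
  spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)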
00:07:58.696 [2024-07-23 04:03:51.944642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.954 [2024-07-23 04:03:52.056510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.954 [2024-07-23 04:03:52.110786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:59.213  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:59.213 00:07:59.213 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:59.213 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:59.213 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:59.213 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:59.213 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:59.213 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:59.213 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:59.213 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.779 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:59.779 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:59.779 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:59.780 04:03:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.780 [2024-07-23 04:03:53.013936] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:07:59.780 [2024-07-23 04:03:53.014072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76425 ] 00:07:59.780 { 00:07:59.780 "subsystems": [ 00:07:59.780 { 00:07:59.780 "subsystem": "bdev", 00:07:59.780 "config": [ 00:07:59.780 { 00:07:59.780 "params": { 00:07:59.780 "trtype": "pcie", 00:07:59.780 "traddr": "0000:00:10.0", 00:07:59.780 "name": "Nvme0" 00:07:59.780 }, 00:07:59.780 "method": "bdev_nvme_attach_controller" 00:07:59.780 }, 00:07:59.780 { 00:07:59.780 "method": "bdev_wait_for_examine" 00:07:59.780 } 00:07:59.780 ] 00:07:59.780 } 00:07:59.780 ] 00:07:59.780 } 00:08:00.038 [2024-07-23 04:03:53.130336] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:00.038 [2024-07-23 04:03:53.148293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.038 [2024-07-23 04:03:53.214007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.038 [2024-07-23 04:03:53.268383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.296  Copying: 48/48 [kB] (average 46 MBps) 00:08:00.296 00:08:00.296 04:03:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:00.296 04:03:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:00.296 04:03:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:00.296 04:03:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.296 [2024-07-23 04:03:53.629173] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:00.296 [2024-07-23 04:03:53.629258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76439 ] 00:08:00.296 { 00:08:00.296 "subsystems": [ 00:08:00.296 { 00:08:00.296 "subsystem": "bdev", 00:08:00.296 "config": [ 00:08:00.296 { 00:08:00.296 "params": { 00:08:00.296 "trtype": "pcie", 00:08:00.296 "traddr": "0000:00:10.0", 00:08:00.296 "name": "Nvme0" 00:08:00.296 }, 00:08:00.296 "method": "bdev_nvme_attach_controller" 00:08:00.296 }, 00:08:00.296 { 00:08:00.296 "method": "bdev_wait_for_examine" 00:08:00.296 } 00:08:00.296 ] 00:08:00.296 } 00:08:00.296 ] 00:08:00.296 } 00:08:00.555 [2024-07-23 04:03:53.750272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:00.555 [2024-07-23 04:03:53.770503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.555 [2024-07-23 04:03:53.852308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.813 [2024-07-23 04:03:53.911939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.070  Copying: 48/48 [kB] (average 46 MBps) 00:08:01.070 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:01.070 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.070 { 00:08:01.070 "subsystems": [ 00:08:01.070 { 00:08:01.070 "subsystem": "bdev", 00:08:01.070 "config": [ 00:08:01.070 { 00:08:01.070 "params": { 00:08:01.070 "trtype": "pcie", 00:08:01.070 "traddr": "0000:00:10.0", 00:08:01.070 "name": "Nvme0" 00:08:01.070 }, 00:08:01.070 "method": "bdev_nvme_attach_controller" 00:08:01.070 }, 00:08:01.070 { 00:08:01.070 "method": "bdev_wait_for_examine" 00:08:01.070 } 00:08:01.070 ] 00:08:01.070 } 00:08:01.070 ] 00:08:01.070 } 00:08:01.070 [2024-07-23 04:03:54.289747] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:01.070 [2024-07-23 04:03:54.289857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76460 ] 00:08:01.328 [2024-07-23 04:03:54.416672] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:01.328 [2024-07-23 04:03:54.429369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.328 [2024-07-23 04:03:54.500380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.328 [2024-07-23 04:03:54.555359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.585  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:01.585 00:08:01.585 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:01.585 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:01.585 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:01.585 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:01.585 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:01.585 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:01.585 04:03:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.150 04:03:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:02.150 04:03:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:02.150 04:03:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:02.150 04:03:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.150 { 00:08:02.150 "subsystems": [ 00:08:02.150 { 00:08:02.150 "subsystem": "bdev", 00:08:02.150 "config": [ 00:08:02.150 { 00:08:02.150 "params": { 00:08:02.150 "trtype": "pcie", 00:08:02.150 "traddr": "0000:00:10.0", 00:08:02.150 "name": "Nvme0" 00:08:02.150 }, 00:08:02.150 "method": "bdev_nvme_attach_controller" 00:08:02.150 }, 00:08:02.150 { 00:08:02.150 "method": "bdev_wait_for_examine" 00:08:02.150 } 00:08:02.150 ] 00:08:02.150 } 00:08:02.150 ] 00:08:02.150 } 00:08:02.150 [2024-07-23 04:03:55.411775] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:02.150 [2024-07-23 04:03:55.412237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76479 ] 00:08:02.408 [2024-07-23 04:03:55.538049] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:02.408 [2024-07-23 04:03:55.556088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.408 [2024-07-23 04:03:55.640378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.408 [2024-07-23 04:03:55.691819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:02.666  Copying: 48/48 [kB] (average 46 MBps) 00:08:02.666 00:08:02.666 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:02.666 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:02.666 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:02.666 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.924 [2024-07-23 04:03:56.049220] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:02.924 [2024-07-23 04:03:56.049567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76491 ] 00:08:02.924 { 00:08:02.924 "subsystems": [ 00:08:02.924 { 00:08:02.925 "subsystem": "bdev", 00:08:02.925 "config": [ 00:08:02.925 { 00:08:02.925 "params": { 00:08:02.925 "trtype": "pcie", 00:08:02.925 "traddr": "0000:00:10.0", 00:08:02.925 "name": "Nvme0" 00:08:02.925 }, 00:08:02.925 "method": "bdev_nvme_attach_controller" 00:08:02.925 }, 00:08:02.925 { 00:08:02.925 "method": "bdev_wait_for_examine" 00:08:02.925 } 00:08:02.925 ] 00:08:02.925 } 00:08:02.925 ] 00:08:02.925 } 00:08:02.925 [2024-07-23 04:03:56.166610] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:02.925 [2024-07-23 04:03:56.181768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.925 [2024-07-23 04:03:56.251696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.183 [2024-07-23 04:03:56.304630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:03.441  Copying: 48/48 [kB] (average 46 MBps) 00:08:03.441 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.441 04:03:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.441 [2024-07-23 04:03:56.675882] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:03.441 [2024-07-23 04:03:56.675984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76508 ] 00:08:03.441 { 00:08:03.441 "subsystems": [ 00:08:03.441 { 00:08:03.441 "subsystem": "bdev", 00:08:03.441 "config": [ 00:08:03.441 { 00:08:03.441 "params": { 00:08:03.441 "trtype": "pcie", 00:08:03.441 "traddr": "0000:00:10.0", 00:08:03.441 "name": "Nvme0" 00:08:03.441 }, 00:08:03.441 "method": "bdev_nvme_attach_controller" 00:08:03.441 }, 00:08:03.441 { 00:08:03.441 "method": "bdev_wait_for_examine" 00:08:03.441 } 00:08:03.441 ] 00:08:03.441 } 00:08:03.441 ] 00:08:03.441 } 00:08:03.700 [2024-07-23 04:03:56.791583] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:03.700 [2024-07-23 04:03:56.807596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.700 [2024-07-23 04:03:56.859091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.700 [2024-07-23 04:03:56.911766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:03.958  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:03.958 00:08:03.958 00:08:03.958 real 0m15.505s 00:08:03.958 user 0m11.321s 00:08:03.958 sys 0m5.636s 00:08:03.958 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.958 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.958 ************************************ 00:08:03.958 END TEST dd_rw 00:08:03.958 ************************************ 00:08:03.958 04:03:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:08:03.958 04:03:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:03.958 04:03:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.958 04:03:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.958 04:03:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.958 ************************************ 00:08:03.958 START TEST dd_rw_offset 00:08:03.958 ************************************ 00:08:03.958 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:08:03.958 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:03.958 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:03.959 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:03.959 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:04.223 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:04.223 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=we9tsg4wflu0yq58al482u8k63k9q33mfo1i5zsxmtpgidqq9dnq6euwzwz097hfzfrpdtcn0b0lwk9aoegam7hxhb2y9zcfny17a15wohu1o3f3g8e2p7zseeds0shwifuqe0ue5acwbwmvjrk3v09jawxqwm230wjb03ft16cf2pgbv0u99g165m2h26hss5uvg81qj1ryzobbtjqg9biegag17j2fmabgm52u4wzduhjg9dyku8i3kzzljd2coa134pgjb22up9tdu8gk1mn8a76isqw2d9ph6w7jr1lbcv22xho0nod9zinkrfkwd1uu1eywiomwmi114whxfbto3w13bhq2ex4w44f270oewg6r86kbwt95o1layvjmc7uoaf5niadlrr6gu62yrhdioaqitw46jv7a5heyihdw0sw2vxq9l6j7chwksdloubrd3f39v1w1jxyqxpssi6aw1rrh2er7owzh3zjqv7zefqh0uvfto5be7mzqh9u92e8ccy0ynof0jg7lbeod6ugj3ahvaherqiuebe1bsy43v44xq66ailmd65ikluv5xd7cqpnrwddwx1024zbf37xgagrzvr7d1vqp5xo57h788swt7oblt2d8up0vtavr1lcf7xnw0ezxonwmc4di7aeosnl0zupaycjf9mmlw7rctu7xecch52xe71mmy3f54glfbmncim3zkbwfs983dial8vk4pqlvdgdu7fp2ut0pquekvci8u2d9a1yg20s4iqip8stf0bd0pgj4u6dvowikbnsucat1x50vqc3hwzu7wj72qnau2awo8npmg9es76s0021ixmotr4rjj3efcn1lyb9bjh5cxo88dq2nshwfiqapmufg2bj3liesjqvqew68a1t4s4fparloxpg70p2lhjap1ufqjm1qfxiyp46si1u1tnfeuibcw0i6jw02iybxahewitezoc3h3vbjsw589ipz3erjudtzbcydnrc9cgui4f669qf7gtqc2fvqxssw01xf9t82t72w7hxxqqcipboxp79vg5jwwws991ercb2n23g4az03gd5dktlq9e4ybym349jumrmxeo880ralz9wq5az9jfzl6p2hexw9puaes8gi3fekygf430h0mab5xupwbq3qz554xq921udwtfaw1o7w8dzelwtq7i6rrnxq150xml9dsfz1xlllm2cug81y4jfjfymc5ekz49tg2ftj36b1psqgkknmwtwt434btva67lrrqzmyel21zyp2u6vdatogba7gzo30hxfaajeunakaj3kvcp7co34d9jv2kf9ruow6p12ldyt7ghpam0m98yc23qnlf7xmxh55786o1n5wgcjjfib3gt1fqbkbmrjggpt81eu46w100kcoqgvz8nv4675zs45fo6j4s76ok0v0tla8kjzsotsukwimc8lit0235i8tohb7vl8x429ghhs5sq6rn62uvpuov9jlx76r6v74hr9tevoowpb5uxc1g4s1fmeqo9749lt2lfbibfgwcq96oy5qok5c5la21mzd5j8m8b4a3r9lqca3455nunypvd69h2q0j7hjs7i2om5isoav9eay3mnhoxwp5nmulukco217ah0z2fwi2qhr36lftc0tno33hkm13dbffzp9xfutqxu3zd7j34war75oyy89wm2pgld3wxrk6yey8myi62h9mh5fl5h49f9v4w9w17clz2ew52y0yvtuznjmry7411e47vkfh9k231k96t27kqbl92ug1wcmh8inkjc4rd5bepq7484t1d5ft6domc89vzz8wblbpn4mozhk6ace54mqx0famjxf71tremihs2hyr9b0yf6ni81p0z590ecyjiqqapehlmkjv8v79pmwpsqexkbyycuusp7vod0ptbzfs7fkh45k3ghypf5o6e9px87zlohjuv9ei8bmqwrz6s9x1gk0wcp8yq88vtee4ijm188xzlba05qyt56mzjyvjo5f4geghwxxt50dsif1qpx1rk56yafje6hubu0cbjp7a4oi8t1xui40glzra5psaow03rql161wgazd6hg05hdkr78ztqcyj79etjuax9n36xfjwq4wnluwtju5zgcc25mihfdvrbbd4q8u0ghe6dj93b0j37q33syg15g41ekxcyx64nvha5rehmk109vrlicpz52nq4l6eczg33z5dx1zgdy84zpg5codzsg3gcd3tapzi12p76zyrucsjzl0p9cxjn8fg35lwwfiqa3bdmv1duk36w5p9c229vx6jiv874r9jvs833lb3kt95o4vllkzjg61ewwsdzbb1e9k8v8hz8ef3nb8tb60uz4rdeeq8wfogzl5jrz9nbntc5lxur96bfvvrals58ddu4o2764ah0hnw6njkswbqyg75zk12psesz63ob1v7z7t1gyfgijtbi56cfo0t146r598lk3zyt8ho20bt4ory54cysvgzqlfn8p790vane6j8oarhnphdsno0ig2ztwwel7m05k8y1a7c2iztfpxejdlgr4b4d38czapa7s9y1k93glykn1ochj0tmnzdlk0t01xrgfy09ek6v11c6en2dk63r4j6o8gu39zggr7ngwtuelh55v7md64e7rwr4ld9m55xw2e0yn7qvrlpcm7ho61vczcwacl37x5lcf6b6cvmfvh0ds9s7m82sbi37vxt7yvwoar838nf5bucm8ywe3fabhzykqgkqahm7yh7rili5fhrub3osuuxik3dvb86qw9mfvtlwmlaatmo08vdpk84eer9kf3s740theq2te9ug1uy5d04uxhvsnn21eh1ejat2opjvbamuho6fwbf5yeurz53xq3rr41y5kgv2q8d7in41x0xcmhqp7e6pximkzaboquwx64uevw2p594hp3nz6rvl0qhawekbv9sgyj15bj3msrnb0pii0dcxmlbekfedhsujsnhkuw48ojw3jcdcqnxhfm77zju33795m0gav2mcflsfuwx5ixu6kz3o08a3whet0j2r3cylycogdpr66vcvm89itei8kbibf7t7us7jmfz7koeu0spa857h50eydyqzs3txiyyvocisan2e40rfrfnoqx1dckbtfni142ump9jhc3564pw5abra82b7bcmi8aepqutcc9398i58vbmbng719awa04rvxdcju5zq1rlxitatwvhdlsischf68wvix0oaflttt8k5or85hgkzti2854ew10ap776mrs48zkezn81hlqdwh2cnacwbyp1y54a1xa2s65ooi29if9mfqy59m3av3p4xe5kyecgpx3syyvottqhdp2giszs4iune1ptzead3rxalyfif3l6wlwdzeeuuhoo3zox2wljm7t47yuqqr007nvi6ng48epb88rqiwo3jj848m8z2n0d5otdsl87isr3dzm4frx2g73hub3ky5zu458kfo63178rfpoiwldg9jgs5v9xi6bt85ufap1jvcomc
esfcsovm6l6k73exqgclkz49nlfuea6ofj7g05wawxsldkoby7eazw1moonb8ix63irl7bmozn854kxv5tk5691ay7oor8wmrdf81zbcjkobip6e01tn8eb41h7ankp4e50kbqtjc62mmo4vvd5rgybkpd7j8l7ivhukv6ge0204g5me2c3g11lojf2je2gqdm5mex1nk861hv76gyg6r1azycoozmcgv90d170huepkz347c415z451vrir63ydgv474lswz2otnhrrcy1ebkf1yeay2wo683rma5yusedocpy8k650jezgcg8i9t9ph4mp0yaka8cruen13rq3zj79b87e5rzon1ke9gi94eunc2l6att0qqmkwnlrtepsyf46ttv3iiypf7j6regmye9abbwp6t9ssmkzs4ev7p8iplnovzzt9vmqvjad6obms32rnnq7j8wq2s13k2b9etcle99ikuojnvr1eo4jocmzf1udt3sedto3dk7pbixyar9mr2808hre7ndo2rfrc0ab782hg2fsoa 00:08:04.223 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:04.223 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:04.223 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:04.223 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:04.223 [2024-07-23 04:03:57.358574] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:04.223 [2024-07-23 04:03:57.358797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76544 ] 00:08:04.223 { 00:08:04.223 "subsystems": [ 00:08:04.223 { 00:08:04.223 "subsystem": "bdev", 00:08:04.223 "config": [ 00:08:04.223 { 00:08:04.223 "params": { 00:08:04.223 "trtype": "pcie", 00:08:04.223 "traddr": "0000:00:10.0", 00:08:04.223 "name": "Nvme0" 00:08:04.223 }, 00:08:04.223 "method": "bdev_nvme_attach_controller" 00:08:04.223 }, 00:08:04.223 { 00:08:04.223 "method": "bdev_wait_for_examine" 00:08:04.223 } 00:08:04.223 ] 00:08:04.223 } 00:08:04.223 ] 00:08:04.223 } 00:08:04.223 [2024-07-23 04:03:57.474813] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:04.223 [2024-07-23 04:03:57.491992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.223 [2024-07-23 04:03:57.549325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.482 [2024-07-23 04:03:57.603766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.741  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:04.741 00:08:04.741 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:04.741 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:04.741 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:04.741 04:03:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:04.741 { 00:08:04.741 "subsystems": [ 00:08:04.741 { 00:08:04.741 "subsystem": "bdev", 00:08:04.741 "config": [ 00:08:04.741 { 00:08:04.741 "params": { 00:08:04.741 "trtype": "pcie", 00:08:04.741 "traddr": "0000:00:10.0", 00:08:04.741 "name": "Nvme0" 00:08:04.741 }, 00:08:04.741 "method": "bdev_nvme_attach_controller" 00:08:04.741 }, 00:08:04.741 { 00:08:04.741 "method": "bdev_wait_for_examine" 00:08:04.741 } 00:08:04.741 ] 00:08:04.741 } 00:08:04.741 ] 00:08:04.742 } 00:08:04.742 [2024-07-23 04:03:57.988262] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:04.742 [2024-07-23 04:03:57.988361] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76552 ] 00:08:05.001 [2024-07-23 04:03:58.110620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
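The dd_rw_offset test that starts above swaps the block-size sweep for the --seek/--skip arguments: 4096 bytes of generated data are written one block into the bdev, read back from the same offset, and the two strings are compared (the read -rn4096 / [[ ... == ... ]] check that follows below). A minimal reconstruction under the same assumptions as the earlier sketches:

  # Write 4 KiB of generated data starting at block offset 1.
  spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)
  # Read one block back from the same offset into the second dump file.
  spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 --json <(gen_conf)
  # The test then reads 4096 characters out of dd.dump1 and string-compares them
  # with the data it originally generated, before clearing the bdev again.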
00:08:05.001 [2024-07-23 04:03:58.128978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.001 [2024-07-23 04:03:58.195948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.001 [2024-07-23 04:03:58.252545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:05.260  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:05.260 00:08:05.260 04:03:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:05.260 ************************************ 00:08:05.260 END TEST dd_rw_offset 00:08:05.260 ************************************ 00:08:05.261 04:03:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ we9tsg4wflu0yq58al482u8k63k9q33mfo1i5zsxmtpgidqq9dnq6euwzwz097hfzfrpdtcn0b0lwk9aoegam7hxhb2y9zcfny17a15wohu1o3f3g8e2p7zseeds0shwifuqe0ue5acwbwmvjrk3v09jawxqwm230wjb03ft16cf2pgbv0u99g165m2h26hss5uvg81qj1ryzobbtjqg9biegag17j2fmabgm52u4wzduhjg9dyku8i3kzzljd2coa134pgjb22up9tdu8gk1mn8a76isqw2d9ph6w7jr1lbcv22xho0nod9zinkrfkwd1uu1eywiomwmi114whxfbto3w13bhq2ex4w44f270oewg6r86kbwt95o1layvjmc7uoaf5niadlrr6gu62yrhdioaqitw46jv7a5heyihdw0sw2vxq9l6j7chwksdloubrd3f39v1w1jxyqxpssi6aw1rrh2er7owzh3zjqv7zefqh0uvfto5be7mzqh9u92e8ccy0ynof0jg7lbeod6ugj3ahvaherqiuebe1bsy43v44xq66ailmd65ikluv5xd7cqpnrwddwx1024zbf37xgagrzvr7d1vqp5xo57h788swt7oblt2d8up0vtavr1lcf7xnw0ezxonwmc4di7aeosnl0zupaycjf9mmlw7rctu7xecch52xe71mmy3f54glfbmncim3zkbwfs983dial8vk4pqlvdgdu7fp2ut0pquekvci8u2d9a1yg20s4iqip8stf0bd0pgj4u6dvowikbnsucat1x50vqc3hwzu7wj72qnau2awo8npmg9es76s0021ixmotr4rjj3efcn1lyb9bjh5cxo88dq2nshwfiqapmufg2bj3liesjqvqew68a1t4s4fparloxpg70p2lhjap1ufqjm1qfxiyp46si1u1tnfeuibcw0i6jw02iybxahewitezoc3h3vbjsw589ipz3erjudtzbcydnrc9cgui4f669qf7gtqc2fvqxssw01xf9t82t72w7hxxqqcipboxp79vg5jwwws991ercb2n23g4az03gd5dktlq9e4ybym349jumrmxeo880ralz9wq5az9jfzl6p2hexw9puaes8gi3fekygf430h0mab5xupwbq3qz554xq921udwtfaw1o7w8dzelwtq7i6rrnxq150xml9dsfz1xlllm2cug81y4jfjfymc5ekz49tg2ftj36b1psqgkknmwtwt434btva67lrrqzmyel21zyp2u6vdatogba7gzo30hxfaajeunakaj3kvcp7co34d9jv2kf9ruow6p12ldyt7ghpam0m98yc23qnlf7xmxh55786o1n5wgcjjfib3gt1fqbkbmrjggpt81eu46w100kcoqgvz8nv4675zs45fo6j4s76ok0v0tla8kjzsotsukwimc8lit0235i8tohb7vl8x429ghhs5sq6rn62uvpuov9jlx76r6v74hr9tevoowpb5uxc1g4s1fmeqo9749lt2lfbibfgwcq96oy5qok5c5la21mzd5j8m8b4a3r9lqca3455nunypvd69h2q0j7hjs7i2om5isoav9eay3mnhoxwp5nmulukco217ah0z2fwi2qhr36lftc0tno33hkm13dbffzp9xfutqxu3zd7j34war75oyy89wm2pgld3wxrk6yey8myi62h9mh5fl5h49f9v4w9w17clz2ew52y0yvtuznjmry7411e47vkfh9k231k96t27kqbl92ug1wcmh8inkjc4rd5bepq7484t1d5ft6domc89vzz8wblbpn4mozhk6ace54mqx0famjxf71tremihs2hyr9b0yf6ni81p0z590ecyjiqqapehlmkjv8v79pmwpsqexkbyycuusp7vod0ptbzfs7fkh45k3ghypf5o6e9px87zlohjuv9ei8bmqwrz6s9x1gk0wcp8yq88vtee4ijm188xzlba05qyt56mzjyvjo5f4geghwxxt50dsif1qpx1rk56yafje6hubu0cbjp7a4oi8t1xui40glzra5psaow03rql161wgazd6hg05hdkr78ztqcyj79etjuax9n36xfjwq4wnluwtju5zgcc25mihfdvrbbd4q8u0ghe6dj93b0j37q33syg15g41ekxcyx64nvha5rehmk109vrlicpz52nq4l6eczg33z5dx1zgdy84zpg5codzsg3gcd3tapzi12p76zyrucsjzl0p9cxjn8fg35lwwfiqa3bdmv1duk36w5p9c229vx6jiv874r9jvs833lb3kt95o4vllkzjg61ewwsdzbb1e9k8v8hz8ef3nb8tb60uz4rdeeq8wfogzl5jrz9nbntc5lxur96bfvvrals58ddu4o2764ah0hnw6njkswbqyg75zk12psesz63ob1v7z7t1gyfgijtbi56cfo0t146r598lk3zyt8ho20bt4ory54cysvgzqlfn8p790vane6j8oarhnphdsno0ig2ztwwel7m05k8y1a7c2iztfpxejdlgr4b4d38czapa7s9y1k93glykn1ochj0tmnzdlk0t01xrgfy09ek6v11c6en2dk63r4j6o8gu39zggr7ngwtuelh55v7md64e7rwr4ld9m55xw2e0yn7qvrlpcm7ho61vczcwacl37x5lcf6b6cvmfvh0ds9s7m82sbi37vxt7yvwoar838nf5bucm8ywe3fabhzykqgkqahm7yh7rili5fhrub3osuuxik3dvb86qw9mfvt
lwmlaatmo08vdpk84eer9kf3s740theq2te9ug1uy5d04uxhvsnn21eh1ejat2opjvbamuho6fwbf5yeurz53xq3rr41y5kgv2q8d7in41x0xcmhqp7e6pximkzaboquwx64uevw2p594hp3nz6rvl0qhawekbv9sgyj15bj3msrnb0pii0dcxmlbekfedhsujsnhkuw48ojw3jcdcqnxhfm77zju33795m0gav2mcflsfuwx5ixu6kz3o08a3whet0j2r3cylycogdpr66vcvm89itei8kbibf7t7us7jmfz7koeu0spa857h50eydyqzs3txiyyvocisan2e40rfrfnoqx1dckbtfni142ump9jhc3564pw5abra82b7bcmi8aepqutcc9398i58vbmbng719awa04rvxdcju5zq1rlxitatwvhdlsischf68wvix0oaflttt8k5or85hgkzti2854ew10ap776mrs48zkezn81hlqdwh2cnacwbyp1y54a1xa2s65ooi29if9mfqy59m3av3p4xe5kyecgpx3syyvottqhdp2giszs4iune1ptzead3rxalyfif3l6wlwdzeeuuhoo3zox2wljm7t47yuqqr007nvi6ng48epb88rqiwo3jj848m8z2n0d5otdsl87isr3dzm4frx2g73hub3ky5zu458kfo63178rfpoiwldg9jgs5v9xi6bt85ufap1jvcomcesfcsovm6l6k73exqgclkz49nlfuea6ofj7g05wawxsldkoby7eazw1moonb8ix63irl7bmozn854kxv5tk5691ay7oor8wmrdf81zbcjkobip6e01tn8eb41h7ankp4e50kbqtjc62mmo4vvd5rgybkpd7j8l7ivhukv6ge0204g5me2c3g11lojf2je2gqdm5mex1nk861hv76gyg6r1azycoozmcgv90d170huepkz347c415z451vrir63ydgv474lswz2otnhrrcy1ebkf1yeay2wo683rma5yusedocpy8k650jezgcg8i9t9ph4mp0yaka8cruen13rq3zj79b87e5rzon1ke9gi94eunc2l6att0qqmkwnlrtepsyf46ttv3iiypf7j6regmye9abbwp6t9ssmkzs4ev7p8iplnovzzt9vmqvjad6obms32rnnq7j8wq2s13k2b9etcle99ikuojnvr1eo4jocmzf1udt3sedto3dk7pbixyar9mr2808hre7ndo2rfrc0ab782hg2fsoa == \w\e\9\t\s\g\4\w\f\l\u\0\y\q\5\8\a\l\4\8\2\u\8\k\6\3\k\9\q\3\3\m\f\o\1\i\5\z\s\x\m\t\p\g\i\d\q\q\9\d\n\q\6\e\u\w\z\w\z\0\9\7\h\f\z\f\r\p\d\t\c\n\0\b\0\l\w\k\9\a\o\e\g\a\m\7\h\x\h\b\2\y\9\z\c\f\n\y\1\7\a\1\5\w\o\h\u\1\o\3\f\3\g\8\e\2\p\7\z\s\e\e\d\s\0\s\h\w\i\f\u\q\e\0\u\e\5\a\c\w\b\w\m\v\j\r\k\3\v\0\9\j\a\w\x\q\w\m\2\3\0\w\j\b\0\3\f\t\1\6\c\f\2\p\g\b\v\0\u\9\9\g\1\6\5\m\2\h\2\6\h\s\s\5\u\v\g\8\1\q\j\1\r\y\z\o\b\b\t\j\q\g\9\b\i\e\g\a\g\1\7\j\2\f\m\a\b\g\m\5\2\u\4\w\z\d\u\h\j\g\9\d\y\k\u\8\i\3\k\z\z\l\j\d\2\c\o\a\1\3\4\p\g\j\b\2\2\u\p\9\t\d\u\8\g\k\1\m\n\8\a\7\6\i\s\q\w\2\d\9\p\h\6\w\7\j\r\1\l\b\c\v\2\2\x\h\o\0\n\o\d\9\z\i\n\k\r\f\k\w\d\1\u\u\1\e\y\w\i\o\m\w\m\i\1\1\4\w\h\x\f\b\t\o\3\w\1\3\b\h\q\2\e\x\4\w\4\4\f\2\7\0\o\e\w\g\6\r\8\6\k\b\w\t\9\5\o\1\l\a\y\v\j\m\c\7\u\o\a\f\5\n\i\a\d\l\r\r\6\g\u\6\2\y\r\h\d\i\o\a\q\i\t\w\4\6\j\v\7\a\5\h\e\y\i\h\d\w\0\s\w\2\v\x\q\9\l\6\j\7\c\h\w\k\s\d\l\o\u\b\r\d\3\f\3\9\v\1\w\1\j\x\y\q\x\p\s\s\i\6\a\w\1\r\r\h\2\e\r\7\o\w\z\h\3\z\j\q\v\7\z\e\f\q\h\0\u\v\f\t\o\5\b\e\7\m\z\q\h\9\u\9\2\e\8\c\c\y\0\y\n\o\f\0\j\g\7\l\b\e\o\d\6\u\g\j\3\a\h\v\a\h\e\r\q\i\u\e\b\e\1\b\s\y\4\3\v\4\4\x\q\6\6\a\i\l\m\d\6\5\i\k\l\u\v\5\x\d\7\c\q\p\n\r\w\d\d\w\x\1\0\2\4\z\b\f\3\7\x\g\a\g\r\z\v\r\7\d\1\v\q\p\5\x\o\5\7\h\7\8\8\s\w\t\7\o\b\l\t\2\d\8\u\p\0\v\t\a\v\r\1\l\c\f\7\x\n\w\0\e\z\x\o\n\w\m\c\4\d\i\7\a\e\o\s\n\l\0\z\u\p\a\y\c\j\f\9\m\m\l\w\7\r\c\t\u\7\x\e\c\c\h\5\2\x\e\7\1\m\m\y\3\f\5\4\g\l\f\b\m\n\c\i\m\3\z\k\b\w\f\s\9\8\3\d\i\a\l\8\v\k\4\p\q\l\v\d\g\d\u\7\f\p\2\u\t\0\p\q\u\e\k\v\c\i\8\u\2\d\9\a\1\y\g\2\0\s\4\i\q\i\p\8\s\t\f\0\b\d\0\p\g\j\4\u\6\d\v\o\w\i\k\b\n\s\u\c\a\t\1\x\5\0\v\q\c\3\h\w\z\u\7\w\j\7\2\q\n\a\u\2\a\w\o\8\n\p\m\g\9\e\s\7\6\s\0\0\2\1\i\x\m\o\t\r\4\r\j\j\3\e\f\c\n\1\l\y\b\9\b\j\h\5\c\x\o\8\8\d\q\2\n\s\h\w\f\i\q\a\p\m\u\f\g\2\b\j\3\l\i\e\s\j\q\v\q\e\w\6\8\a\1\t\4\s\4\f\p\a\r\l\o\x\p\g\7\0\p\2\l\h\j\a\p\1\u\f\q\j\m\1\q\f\x\i\y\p\4\6\s\i\1\u\1\t\n\f\e\u\i\b\c\w\0\i\6\j\w\0\2\i\y\b\x\a\h\e\w\i\t\e\z\o\c\3\h\3\v\b\j\s\w\5\8\9\i\p\z\3\e\r\j\u\d\t\z\b\c\y\d\n\r\c\9\c\g\u\i\4\f\6\6\9\q\f\7\g\t\q\c\2\f\v\q\x\s\s\w\0\1\x\f\9\t\8\2\t\7\2\w\7\h\x\x\q\q\c\i\p\b\o\x\p\7\9\v\g\5\j\w\w\w\s\9\9\1\e\r\c\b\2\n\2\3\g\4\a\z\0\3\g\d\5\d\k\t\l\q\9\e\4\y\b\y\m\3\4\9\j\u\m\r\m\x\e\o\8\8\0\r\a\l\z\9\w\q\5\a\z\9\j\f\z\l\6\p\2\h\e\x\w\9\p\
u\a\e\s\8\g\i\3\f\e\k\y\g\f\4\3\0\h\0\m\a\b\5\x\u\p\w\b\q\3\q\z\5\5\4\x\q\9\2\1\u\d\w\t\f\a\w\1\o\7\w\8\d\z\e\l\w\t\q\7\i\6\r\r\n\x\q\1\5\0\x\m\l\9\d\s\f\z\1\x\l\l\l\m\2\c\u\g\8\1\y\4\j\f\j\f\y\m\c\5\e\k\z\4\9\t\g\2\f\t\j\3\6\b\1\p\s\q\g\k\k\n\m\w\t\w\t\4\3\4\b\t\v\a\6\7\l\r\r\q\z\m\y\e\l\2\1\z\y\p\2\u\6\v\d\a\t\o\g\b\a\7\g\z\o\3\0\h\x\f\a\a\j\e\u\n\a\k\a\j\3\k\v\c\p\7\c\o\3\4\d\9\j\v\2\k\f\9\r\u\o\w\6\p\1\2\l\d\y\t\7\g\h\p\a\m\0\m\9\8\y\c\2\3\q\n\l\f\7\x\m\x\h\5\5\7\8\6\o\1\n\5\w\g\c\j\j\f\i\b\3\g\t\1\f\q\b\k\b\m\r\j\g\g\p\t\8\1\e\u\4\6\w\1\0\0\k\c\o\q\g\v\z\8\n\v\4\6\7\5\z\s\4\5\f\o\6\j\4\s\7\6\o\k\0\v\0\t\l\a\8\k\j\z\s\o\t\s\u\k\w\i\m\c\8\l\i\t\0\2\3\5\i\8\t\o\h\b\7\v\l\8\x\4\2\9\g\h\h\s\5\s\q\6\r\n\6\2\u\v\p\u\o\v\9\j\l\x\7\6\r\6\v\7\4\h\r\9\t\e\v\o\o\w\p\b\5\u\x\c\1\g\4\s\1\f\m\e\q\o\9\7\4\9\l\t\2\l\f\b\i\b\f\g\w\c\q\9\6\o\y\5\q\o\k\5\c\5\l\a\2\1\m\z\d\5\j\8\m\8\b\4\a\3\r\9\l\q\c\a\3\4\5\5\n\u\n\y\p\v\d\6\9\h\2\q\0\j\7\h\j\s\7\i\2\o\m\5\i\s\o\a\v\9\e\a\y\3\m\n\h\o\x\w\p\5\n\m\u\l\u\k\c\o\2\1\7\a\h\0\z\2\f\w\i\2\q\h\r\3\6\l\f\t\c\0\t\n\o\3\3\h\k\m\1\3\d\b\f\f\z\p\9\x\f\u\t\q\x\u\3\z\d\7\j\3\4\w\a\r\7\5\o\y\y\8\9\w\m\2\p\g\l\d\3\w\x\r\k\6\y\e\y\8\m\y\i\6\2\h\9\m\h\5\f\l\5\h\4\9\f\9\v\4\w\9\w\1\7\c\l\z\2\e\w\5\2\y\0\y\v\t\u\z\n\j\m\r\y\7\4\1\1\e\4\7\v\k\f\h\9\k\2\3\1\k\9\6\t\2\7\k\q\b\l\9\2\u\g\1\w\c\m\h\8\i\n\k\j\c\4\r\d\5\b\e\p\q\7\4\8\4\t\1\d\5\f\t\6\d\o\m\c\8\9\v\z\z\8\w\b\l\b\p\n\4\m\o\z\h\k\6\a\c\e\5\4\m\q\x\0\f\a\m\j\x\f\7\1\t\r\e\m\i\h\s\2\h\y\r\9\b\0\y\f\6\n\i\8\1\p\0\z\5\9\0\e\c\y\j\i\q\q\a\p\e\h\l\m\k\j\v\8\v\7\9\p\m\w\p\s\q\e\x\k\b\y\y\c\u\u\s\p\7\v\o\d\0\p\t\b\z\f\s\7\f\k\h\4\5\k\3\g\h\y\p\f\5\o\6\e\9\p\x\8\7\z\l\o\h\j\u\v\9\e\i\8\b\m\q\w\r\z\6\s\9\x\1\g\k\0\w\c\p\8\y\q\8\8\v\t\e\e\4\i\j\m\1\8\8\x\z\l\b\a\0\5\q\y\t\5\6\m\z\j\y\v\j\o\5\f\4\g\e\g\h\w\x\x\t\5\0\d\s\i\f\1\q\p\x\1\r\k\5\6\y\a\f\j\e\6\h\u\b\u\0\c\b\j\p\7\a\4\o\i\8\t\1\x\u\i\4\0\g\l\z\r\a\5\p\s\a\o\w\0\3\r\q\l\1\6\1\w\g\a\z\d\6\h\g\0\5\h\d\k\r\7\8\z\t\q\c\y\j\7\9\e\t\j\u\a\x\9\n\3\6\x\f\j\w\q\4\w\n\l\u\w\t\j\u\5\z\g\c\c\2\5\m\i\h\f\d\v\r\b\b\d\4\q\8\u\0\g\h\e\6\d\j\9\3\b\0\j\3\7\q\3\3\s\y\g\1\5\g\4\1\e\k\x\c\y\x\6\4\n\v\h\a\5\r\e\h\m\k\1\0\9\v\r\l\i\c\p\z\5\2\n\q\4\l\6\e\c\z\g\3\3\z\5\d\x\1\z\g\d\y\8\4\z\p\g\5\c\o\d\z\s\g\3\g\c\d\3\t\a\p\z\i\1\2\p\7\6\z\y\r\u\c\s\j\z\l\0\p\9\c\x\j\n\8\f\g\3\5\l\w\w\f\i\q\a\3\b\d\m\v\1\d\u\k\3\6\w\5\p\9\c\2\2\9\v\x\6\j\i\v\8\7\4\r\9\j\v\s\8\3\3\l\b\3\k\t\9\5\o\4\v\l\l\k\z\j\g\6\1\e\w\w\s\d\z\b\b\1\e\9\k\8\v\8\h\z\8\e\f\3\n\b\8\t\b\6\0\u\z\4\r\d\e\e\q\8\w\f\o\g\z\l\5\j\r\z\9\n\b\n\t\c\5\l\x\u\r\9\6\b\f\v\v\r\a\l\s\5\8\d\d\u\4\o\2\7\6\4\a\h\0\h\n\w\6\n\j\k\s\w\b\q\y\g\7\5\z\k\1\2\p\s\e\s\z\6\3\o\b\1\v\7\z\7\t\1\g\y\f\g\i\j\t\b\i\5\6\c\f\o\0\t\1\4\6\r\5\9\8\l\k\3\z\y\t\8\h\o\2\0\b\t\4\o\r\y\5\4\c\y\s\v\g\z\q\l\f\n\8\p\7\9\0\v\a\n\e\6\j\8\o\a\r\h\n\p\h\d\s\n\o\0\i\g\2\z\t\w\w\e\l\7\m\0\5\k\8\y\1\a\7\c\2\i\z\t\f\p\x\e\j\d\l\g\r\4\b\4\d\3\8\c\z\a\p\a\7\s\9\y\1\k\9\3\g\l\y\k\n\1\o\c\h\j\0\t\m\n\z\d\l\k\0\t\0\1\x\r\g\f\y\0\9\e\k\6\v\1\1\c\6\e\n\2\d\k\6\3\r\4\j\6\o\8\g\u\3\9\z\g\g\r\7\n\g\w\t\u\e\l\h\5\5\v\7\m\d\6\4\e\7\r\w\r\4\l\d\9\m\5\5\x\w\2\e\0\y\n\7\q\v\r\l\p\c\m\7\h\o\6\1\v\c\z\c\w\a\c\l\3\7\x\5\l\c\f\6\b\6\c\v\m\f\v\h\0\d\s\9\s\7\m\8\2\s\b\i\3\7\v\x\t\7\y\v\w\o\a\r\8\3\8\n\f\5\b\u\c\m\8\y\w\e\3\f\a\b\h\z\y\k\q\g\k\q\a\h\m\7\y\h\7\r\i\l\i\5\f\h\r\u\b\3\o\s\u\u\x\i\k\3\d\v\b\8\6\q\w\9\m\f\v\t\l\w\m\l\a\a\t\m\o\0\8\v\d\p\k\8\4\e\e\r\9\k\f\3\s\7\4\0\t\h\e\q\2\t\e\9\u\g\1\u\y\5\d\0\4\u\x\h\v\s\n\n\2\1\e\h\1\e\j\a\t\2\o\p\j\v\b\a\m\u\h\o\6\f\w\b\f\5\y\e\u\r\z\5\3\x\q\3\r\r\4\1\y\5\k\g\v\2\q
\8\d\7\i\n\4\1\x\0\x\c\m\h\q\p\7\e\6\p\x\i\m\k\z\a\b\o\q\u\w\x\6\4\u\e\v\w\2\p\5\9\4\h\p\3\n\z\6\r\v\l\0\q\h\a\w\e\k\b\v\9\s\g\y\j\1\5\b\j\3\m\s\r\n\b\0\p\i\i\0\d\c\x\m\l\b\e\k\f\e\d\h\s\u\j\s\n\h\k\u\w\4\8\o\j\w\3\j\c\d\c\q\n\x\h\f\m\7\7\z\j\u\3\3\7\9\5\m\0\g\a\v\2\m\c\f\l\s\f\u\w\x\5\i\x\u\6\k\z\3\o\0\8\a\3\w\h\e\t\0\j\2\r\3\c\y\l\y\c\o\g\d\p\r\6\6\v\c\v\m\8\9\i\t\e\i\8\k\b\i\b\f\7\t\7\u\s\7\j\m\f\z\7\k\o\e\u\0\s\p\a\8\5\7\h\5\0\e\y\d\y\q\z\s\3\t\x\i\y\y\v\o\c\i\s\a\n\2\e\4\0\r\f\r\f\n\o\q\x\1\d\c\k\b\t\f\n\i\1\4\2\u\m\p\9\j\h\c\3\5\6\4\p\w\5\a\b\r\a\8\2\b\7\b\c\m\i\8\a\e\p\q\u\t\c\c\9\3\9\8\i\5\8\v\b\m\b\n\g\7\1\9\a\w\a\0\4\r\v\x\d\c\j\u\5\z\q\1\r\l\x\i\t\a\t\w\v\h\d\l\s\i\s\c\h\f\6\8\w\v\i\x\0\o\a\f\l\t\t\t\8\k\5\o\r\8\5\h\g\k\z\t\i\2\8\5\4\e\w\1\0\a\p\7\7\6\m\r\s\4\8\z\k\e\z\n\8\1\h\l\q\d\w\h\2\c\n\a\c\w\b\y\p\1\y\5\4\a\1\x\a\2\s\6\5\o\o\i\2\9\i\f\9\m\f\q\y\5\9\m\3\a\v\3\p\4\x\e\5\k\y\e\c\g\p\x\3\s\y\y\v\o\t\t\q\h\d\p\2\g\i\s\z\s\4\i\u\n\e\1\p\t\z\e\a\d\3\r\x\a\l\y\f\i\f\3\l\6\w\l\w\d\z\e\e\u\u\h\o\o\3\z\o\x\2\w\l\j\m\7\t\4\7\y\u\q\q\r\0\0\7\n\v\i\6\n\g\4\8\e\p\b\8\8\r\q\i\w\o\3\j\j\8\4\8\m\8\z\2\n\0\d\5\o\t\d\s\l\8\7\i\s\r\3\d\z\m\4\f\r\x\2\g\7\3\h\u\b\3\k\y\5\z\u\4\5\8\k\f\o\6\3\1\7\8\r\f\p\o\i\w\l\d\g\9\j\g\s\5\v\9\x\i\6\b\t\8\5\u\f\a\p\1\j\v\c\o\m\c\e\s\f\c\s\o\v\m\6\l\6\k\7\3\e\x\q\g\c\l\k\z\4\9\n\l\f\u\e\a\6\o\f\j\7\g\0\5\w\a\w\x\s\l\d\k\o\b\y\7\e\a\z\w\1\m\o\o\n\b\8\i\x\6\3\i\r\l\7\b\m\o\z\n\8\5\4\k\x\v\5\t\k\5\6\9\1\a\y\7\o\o\r\8\w\m\r\d\f\8\1\z\b\c\j\k\o\b\i\p\6\e\0\1\t\n\8\e\b\4\1\h\7\a\n\k\p\4\e\5\0\k\b\q\t\j\c\6\2\m\m\o\4\v\v\d\5\r\g\y\b\k\p\d\7\j\8\l\7\i\v\h\u\k\v\6\g\e\0\2\0\4\g\5\m\e\2\c\3\g\1\1\l\o\j\f\2\j\e\2\g\q\d\m\5\m\e\x\1\n\k\8\6\1\h\v\7\6\g\y\g\6\r\1\a\z\y\c\o\o\z\m\c\g\v\9\0\d\1\7\0\h\u\e\p\k\z\3\4\7\c\4\1\5\z\4\5\1\v\r\i\r\6\3\y\d\g\v\4\7\4\l\s\w\z\2\o\t\n\h\r\r\c\y\1\e\b\k\f\1\y\e\a\y\2\w\o\6\8\3\r\m\a\5\y\u\s\e\d\o\c\p\y\8\k\6\5\0\j\e\z\g\c\g\8\i\9\t\9\p\h\4\m\p\0\y\a\k\a\8\c\r\u\e\n\1\3\r\q\3\z\j\7\9\b\8\7\e\5\r\z\o\n\1\k\e\9\g\i\9\4\e\u\n\c\2\l\6\a\t\t\0\q\q\m\k\w\n\l\r\t\e\p\s\y\f\4\6\t\t\v\3\i\i\y\p\f\7\j\6\r\e\g\m\y\e\9\a\b\b\w\p\6\t\9\s\s\m\k\z\s\4\e\v\7\p\8\i\p\l\n\o\v\z\z\t\9\v\m\q\v\j\a\d\6\o\b\m\s\3\2\r\n\n\q\7\j\8\w\q\2\s\1\3\k\2\b\9\e\t\c\l\e\9\9\i\k\u\o\j\n\v\r\1\e\o\4\j\o\c\m\z\f\1\u\d\t\3\s\e\d\t\o\3\d\k\7\p\b\i\x\y\a\r\9\m\r\2\8\0\8\h\r\e\7\n\d\o\2\r\f\r\c\0\a\b\7\8\2\h\g\2\f\s\o\a ]] 00:08:05.261 00:08:05.261 real 0m1.299s 00:08:05.261 user 0m0.861s 00:08:05.261 sys 0m0.601s 00:08:05.261 04:03:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.261 04:03:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 
--count=1 --json /dev/fd/62 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.520 04:03:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 [2024-07-23 04:03:58.654530] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:05.520 [2024-07-23 04:03:58.654634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76587 ] 00:08:05.520 { 00:08:05.520 "subsystems": [ 00:08:05.520 { 00:08:05.520 "subsystem": "bdev", 00:08:05.520 "config": [ 00:08:05.520 { 00:08:05.520 "params": { 00:08:05.520 "trtype": "pcie", 00:08:05.520 "traddr": "0000:00:10.0", 00:08:05.520 "name": "Nvme0" 00:08:05.520 }, 00:08:05.520 "method": "bdev_nvme_attach_controller" 00:08:05.520 }, 00:08:05.520 { 00:08:05.520 "method": "bdev_wait_for_examine" 00:08:05.520 } 00:08:05.520 ] 00:08:05.520 } 00:08:05.520 ] 00:08:05.520 } 00:08:05.520 [2024-07-23 04:03:58.771659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:05.520 [2024-07-23 04:03:58.785873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.520 [2024-07-23 04:03:58.839941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.779 [2024-07-23 04:03:58.890551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.038  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:06.038 00:08:06.038 04:03:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.038 ************************************ 00:08:06.038 END TEST spdk_dd_basic_rw 00:08:06.038 ************************************ 00:08:06.038 00:08:06.038 real 0m18.512s 00:08:06.038 user 0m13.152s 00:08:06.038 sys 0m6.899s 00:08:06.038 04:03:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.038 04:03:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.038 04:03:59 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:06.038 04:03:59 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:06.038 04:03:59 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.038 04:03:59 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.038 04:03:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:06.038 ************************************ 00:08:06.038 START TEST spdk_dd_posix 00:08:06.038 ************************************ 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:06.038 * Looking for test storage... 
00:08:06.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:06.038 04:03:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:06.039 * First test run, liburing in use 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:06.039 ************************************ 00:08:06.039 START TEST dd_flag_append 00:08:06.039 ************************************ 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=0qpgh9hdm8cv9jg0eqrjslfrb8bvjfwd 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=skdwxu25d79oh5ci3glov6mct93xbls5 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 0qpgh9hdm8cv9jg0eqrjslfrb8bvjfwd 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s skdwxu25d79oh5ci3glov6mct93xbls5 00:08:06.039 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:06.297 [2024-07-23 04:03:59.421336] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:06.297 [2024-07-23 04:03:59.421437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76651 ] 00:08:06.297 [2024-07-23 04:03:59.542800] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
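A note on what the dd_flag_append run above is doing: the harness writes one random 32-character string to dd.dump0 and another to dd.dump1, copies dump0 onto dump1 with --oflag=append, and then checks that dump1 holds the second string immediately followed by the first. A condensed sketch of that flow, assuming SPDK_DD points at a built spdk_dd binary and using /dev/urandom as a stand-in for the harness's gen_bytes helper:

    SPDK_DD=${SPDK_DD:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}
    rand32() { tr -dc 'a-z0-9' < /dev/urandom | head -c 32; }

    dump0=$(rand32)          # e.g. 0qpgh9hdm8cv9jg0eqrjslfrb8bvjfwd in the log above
    dump1=$(rand32)          # e.g. skdwxu25d79oh5ci3glov6mct93xbls5
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1

    # --oflag=append opens the output for appending instead of truncating it.
    "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append

    # dd.dump1 should now be the second string followed by the first.
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]] && echo 'append flag OK'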
00:08:06.297 [2024-07-23 04:03:59.559036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.297 [2024-07-23 04:03:59.622943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.556 [2024-07-23 04:03:59.677631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.556  Copying: 32/32 [B] (average 31 kBps) 00:08:06.556 00:08:06.556 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ skdwxu25d79oh5ci3glov6mct93xbls50qpgh9hdm8cv9jg0eqrjslfrb8bvjfwd == \s\k\d\w\x\u\2\5\d\7\9\o\h\5\c\i\3\g\l\o\v\6\m\c\t\9\3\x\b\l\s\5\0\q\p\g\h\9\h\d\m\8\c\v\9\j\g\0\e\q\r\j\s\l\f\r\b\8\b\v\j\f\w\d ]] 00:08:06.556 00:08:06.556 real 0m0.534s 00:08:06.556 user 0m0.277s 00:08:06.556 sys 0m0.275s 00:08:06.556 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.556 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:06.556 ************************************ 00:08:06.556 END TEST dd_flag_append 00:08:06.556 ************************************ 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:06.815 ************************************ 00:08:06.815 START TEST dd_flag_directory 00:08:06.815 ************************************ 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.815 04:03:59 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.815 04:03:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.815 [2024-07-23 04:04:00.011466] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:06.815 [2024-07-23 04:04:00.011557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76674 ] 00:08:06.815 [2024-07-23 04:04:00.132165] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:06.815 [2024-07-23 04:04:00.151764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.074 [2024-07-23 04:04:00.220578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.074 [2024-07-23 04:04:00.277751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.074 [2024-07-23 04:04:00.308742] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:07.074 [2024-07-23 04:04:00.308795] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:07.074 [2024-07-23 04:04:00.308814] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.332 [2024-07-23 04:04:00.421376] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:07.332 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:08:07.332 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:07.332 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:08:07.332 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:08:07.332 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:08:07.332 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:07.332 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:07.332 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:08:07.332 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:07.332 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.333 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.333 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.333 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.333 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.333 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.333 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.333 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.333 04:04:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:07.333 [2024-07-23 04:04:00.568129] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:07.333 [2024-07-23 04:04:00.568214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76689 ] 00:08:07.592 [2024-07-23 04:04:00.682170] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:07.592 [2024-07-23 04:04:00.699652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.592 [2024-07-23 04:04:00.751598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.592 [2024-07-23 04:04:00.807315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.592 [2024-07-23 04:04:00.836503] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:07.592 [2024-07-23 04:04:00.836561] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:07.592 [2024-07-23 04:04:00.836575] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.851 [2024-07-23 04:04:00.940517] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:08:07.851 ************************************ 00:08:07.851 END TEST dd_flag_directory 00:08:07.851 ************************************ 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:07.851 00:08:07.851 real 0m1.077s 00:08:07.851 user 0m0.583s 00:08:07.851 sys 0m0.283s 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:07.851 ************************************ 00:08:07.851 START TEST dd_flag_nofollow 00:08:07.851 ************************************ 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.851 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 
--iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.851 [2024-07-23 04:04:01.144676] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:07.851 [2024-07-23 04:04:01.144768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76712 ] 00:08:08.110 [2024-07-23 04:04:01.265324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:08.110 [2024-07-23 04:04:01.283974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.110 [2024-07-23 04:04:01.342733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.110 [2024-07-23 04:04:01.394855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.110 [2024-07-23 04:04:01.421418] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:08.110 [2024-07-23 04:04:01.421475] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:08.110 [2024-07-23 04:04:01.421490] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.368 [2024-07-23 04:04:01.527949] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.368 04:04:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:08.368 [2024-07-23 04:04:01.676992] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:08.368 [2024-07-23 04:04:01.677083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76727 ] 00:08:08.627 [2024-07-23 04:04:01.797590] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:08.627 [2024-07-23 04:04:01.817317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.627 [2024-07-23 04:04:01.890713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.627 [2024-07-23 04:04:01.941270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.885 [2024-07-23 04:04:01.969123] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:08.885 [2024-07-23 04:04:01.969188] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:08.885 [2024-07-23 04:04:01.969220] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.885 [2024-07-23 04:04:02.085020] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.885 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:08:08.885 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:08.885 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:08:08.885 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:08:08.885 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:08:08.885 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:08.885 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:08.885 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:08.885 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:08.885 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.143 [2024-07-23 04:04:02.241620] 
Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:09.143 [2024-07-23 04:04:02.241717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76729 ] 00:08:09.143 [2024-07-23 04:04:02.364496] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:09.143 [2024-07-23 04:04:02.378153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.143 [2024-07-23 04:04:02.436109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.401 [2024-07-23 04:04:02.489299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.401  Copying: 512/512 [B] (average 500 kBps) 00:08:09.401 00:08:09.401 ************************************ 00:08:09.401 END TEST dd_flag_nofollow 00:08:09.401 ************************************ 00:08:09.402 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ yroawzk1hvvqj3yhc7t6dqa3n1mjhs1s8ts46yw1cuasoyfs21ktwtp74uas28ut4ocgwy8s6ikz8toax72tcfmsavictqa0uondoezauhiyl9sft4dhx4qlftnreamkaubrs6ydb1kax7o2fjur3bd9d2k42frmxow4i2p890nbfcfs0peu3wfmjxrj9wj7uisajvefkpq9kf7ek7vn35mokylu3e24yrtj8wo1v5703lgmnujrdhj88juh8bc77kwj21087luchztvpxq1d5dhcfsjjjgd7audzvz1hzhriv4kb73jfo0ht57uo6ifhhdedl1eus40ao35cskcy2apzxwo98id2ccr2qfxsui94uuedzc7fdkx1ol7xwef6zpsfqutbpxlmplvwjhdpuacuys70bvaeqa6g534xdgr6h59ceqb210jz3iyb10xsxbv26qdqfoutvsg940foi9t16iy4ppdimst0qlbyr5us7w83po01ff7g76ckv9r == \y\r\o\a\w\z\k\1\h\v\v\q\j\3\y\h\c\7\t\6\d\q\a\3\n\1\m\j\h\s\1\s\8\t\s\4\6\y\w\1\c\u\a\s\o\y\f\s\2\1\k\t\w\t\p\7\4\u\a\s\2\8\u\t\4\o\c\g\w\y\8\s\6\i\k\z\8\t\o\a\x\7\2\t\c\f\m\s\a\v\i\c\t\q\a\0\u\o\n\d\o\e\z\a\u\h\i\y\l\9\s\f\t\4\d\h\x\4\q\l\f\t\n\r\e\a\m\k\a\u\b\r\s\6\y\d\b\1\k\a\x\7\o\2\f\j\u\r\3\b\d\9\d\2\k\4\2\f\r\m\x\o\w\4\i\2\p\8\9\0\n\b\f\c\f\s\0\p\e\u\3\w\f\m\j\x\r\j\9\w\j\7\u\i\s\a\j\v\e\f\k\p\q\9\k\f\7\e\k\7\v\n\3\5\m\o\k\y\l\u\3\e\2\4\y\r\t\j\8\w\o\1\v\5\7\0\3\l\g\m\n\u\j\r\d\h\j\8\8\j\u\h\8\b\c\7\7\k\w\j\2\1\0\8\7\l\u\c\h\z\t\v\p\x\q\1\d\5\d\h\c\f\s\j\j\j\g\d\7\a\u\d\z\v\z\1\h\z\h\r\i\v\4\k\b\7\3\j\f\o\0\h\t\5\7\u\o\6\i\f\h\h\d\e\d\l\1\e\u\s\4\0\a\o\3\5\c\s\k\c\y\2\a\p\z\x\w\o\9\8\i\d\2\c\c\r\2\q\f\x\s\u\i\9\4\u\u\e\d\z\c\7\f\d\k\x\1\o\l\7\x\w\e\f\6\z\p\s\f\q\u\t\b\p\x\l\m\p\l\v\w\j\h\d\p\u\a\c\u\y\s\7\0\b\v\a\e\q\a\6\g\5\3\4\x\d\g\r\6\h\5\9\c\e\q\b\2\1\0\j\z\3\i\y\b\1\0\x\s\x\b\v\2\6\q\d\q\f\o\u\t\v\s\g\9\4\0\f\o\i\9\t\1\6\i\y\4\p\p\d\i\m\s\t\0\q\l\b\y\r\5\u\s\7\w\8\3\p\o\0\1\f\f\7\g\7\6\c\k\v\9\r ]] 00:08:09.402 00:08:09.402 real 0m1.647s 00:08:09.402 user 0m0.881s 00:08:09.402 sys 0m0.573s 00:08:09.402 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.402 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
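The dd_flag_nofollow run that just finished checks both directions of the flag: reads and writes through a symlink must fail when nofollow is set ("Too many levels of symbolic links" above) and succeed without it. A sketch under the same assumption about SPDK_DD, with the harness's NOT helper approximated by a plain !:

    SPDK_DD=${SPDK_DD:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}

    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link

    # With nofollow, the copy must refuse to traverse the symlink.
    ! "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
    ! "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow

    # Without the flag, copying through the link is expected to work.
    "$SPDK_DD" --if=dd.dump0.link --of=dd.dump1 && echo 'nofollow flag OK'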
00:08:09.661 ************************************ 00:08:09.661 START TEST dd_flag_noatime 00:08:09.661 ************************************ 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721707442 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721707442 00:08:09.661 04:04:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:10.597 04:04:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:10.597 [2024-07-23 04:04:03.867588] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:10.597 [2024-07-23 04:04:03.867922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76777 ] 00:08:10.856 [2024-07-23 04:04:03.989600] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:10.856 [2024-07-23 04:04:04.010307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.856 [2024-07-23 04:04:04.103682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.856 [2024-07-23 04:04:04.161747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.122  Copying: 512/512 [B] (average 500 kBps) 00:08:11.122 00:08:11.122 04:04:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.122 04:04:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721707442 )) 00:08:11.123 04:04:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.123 04:04:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721707442 )) 00:08:11.123 04:04:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.381 [2024-07-23 04:04:04.465701] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
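The dd_flag_noatime test starting here captures the source file's access time with stat --printf=%X, sleeps one second, and verifies that a copy performed with --iflag=noatime leaves that timestamp untouched while a later plain copy moves it forward. A condensed sketch (GNU stat assumed, SPDK_DD as above):

    SPDK_DD=${SPDK_DD:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}

    atime_before=$(stat --printf=%X dd.dump0)   # access time, seconds since the epoch
    sleep 1

    # --iflag=noatime: the read must not update the source file's atime.
    "$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_before )) || echo 'noatime check failed'

    # A second copy without the flag is expected to advance the atime.
    "$SPDK_DD" --if=dd.dump0 --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) > atime_before )) || echo 'atime did not advance'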
00:08:11.381 [2024-07-23 04:04:04.465794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76791 ] 00:08:11.381 [2024-07-23 04:04:04.589524] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:11.381 [2024-07-23 04:04:04.605521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.381 [2024-07-23 04:04:04.702435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.639 [2024-07-23 04:04:04.760487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.897  Copying: 512/512 [B] (average 500 kBps) 00:08:11.897 00:08:11.897 04:04:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.897 ************************************ 00:08:11.897 END TEST dd_flag_noatime 00:08:11.897 ************************************ 00:08:11.897 04:04:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721707444 )) 00:08:11.897 00:08:11.897 real 0m2.208s 00:08:11.897 user 0m0.675s 00:08:11.897 sys 0m0.565s 00:08:11.897 04:04:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.897 04:04:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:11.897 ************************************ 00:08:11.897 START TEST dd_flags_misc 00:08:11.897 ************************************ 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.897 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:11.897 [2024-07-23 04:04:05.093732] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 
initialization... 00:08:11.897 [2024-07-23 04:04:05.093823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76819 ] 00:08:11.897 [2024-07-23 04:04:05.214468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:11.897 [2024-07-23 04:04:05.233061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.155 [2024-07-23 04:04:05.311594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.155 [2024-07-23 04:04:05.367386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.414  Copying: 512/512 [B] (average 500 kBps) 00:08:12.414 00:08:12.414 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8n2ejo80ca0eulpii03hzfzz6n7zh9hxbui4grl28dvgfolslnpp8jo61z7jm7ub0n9k6yjeeup0j8o3wglzlqjuu9oz8jtxipg3pkqsx274xbzo6xzkwqbdqyao4xsmr34u0xzvy9dpb6elwnxqxye09sfemgodbu8g07iqsvu3dw221vxyqtrg05rb190gewom8rjtta7zuluu5tbj9iw1aavi0rwuq26lr48ir8tvvu1pukh91wbr1tz0a8qw3seqvm8eujavzail858671o3bgvw0gl6jh4oiu2ikin3x7czsjnvcox6zrjda4yg7h2me3aae49wk8pjxrz45zfbc56xo1py1xjddn921hj422svwzkbvqohvtv6wa4dyjpf8gi0pgi0ze1h0jplmxio7xhwcqtawabelhrpkic9yh1zmlgi181dq69sh1vx4ukvtdm2zrnf7qycmyimotn1bp9jlhfbaqx8hn1j7rqcxq1fdh7pdgggklo9on7z == \8\n\2\e\j\o\8\0\c\a\0\e\u\l\p\i\i\0\3\h\z\f\z\z\6\n\7\z\h\9\h\x\b\u\i\4\g\r\l\2\8\d\v\g\f\o\l\s\l\n\p\p\8\j\o\6\1\z\7\j\m\7\u\b\0\n\9\k\6\y\j\e\e\u\p\0\j\8\o\3\w\g\l\z\l\q\j\u\u\9\o\z\8\j\t\x\i\p\g\3\p\k\q\s\x\2\7\4\x\b\z\o\6\x\z\k\w\q\b\d\q\y\a\o\4\x\s\m\r\3\4\u\0\x\z\v\y\9\d\p\b\6\e\l\w\n\x\q\x\y\e\0\9\s\f\e\m\g\o\d\b\u\8\g\0\7\i\q\s\v\u\3\d\w\2\2\1\v\x\y\q\t\r\g\0\5\r\b\1\9\0\g\e\w\o\m\8\r\j\t\t\a\7\z\u\l\u\u\5\t\b\j\9\i\w\1\a\a\v\i\0\r\w\u\q\2\6\l\r\4\8\i\r\8\t\v\v\u\1\p\u\k\h\9\1\w\b\r\1\t\z\0\a\8\q\w\3\s\e\q\v\m\8\e\u\j\a\v\z\a\i\l\8\5\8\6\7\1\o\3\b\g\v\w\0\g\l\6\j\h\4\o\i\u\2\i\k\i\n\3\x\7\c\z\s\j\n\v\c\o\x\6\z\r\j\d\a\4\y\g\7\h\2\m\e\3\a\a\e\4\9\w\k\8\p\j\x\r\z\4\5\z\f\b\c\5\6\x\o\1\p\y\1\x\j\d\d\n\9\2\1\h\j\4\2\2\s\v\w\z\k\b\v\q\o\h\v\t\v\6\w\a\4\d\y\j\p\f\8\g\i\0\p\g\i\0\z\e\1\h\0\j\p\l\m\x\i\o\7\x\h\w\c\q\t\a\w\a\b\e\l\h\r\p\k\i\c\9\y\h\1\z\m\l\g\i\1\8\1\d\q\6\9\s\h\1\v\x\4\u\k\v\t\d\m\2\z\r\n\f\7\q\y\c\m\y\i\m\o\t\n\1\b\p\9\j\l\h\f\b\a\q\x\8\h\n\1\j\7\r\q\c\x\q\1\f\d\h\7\p\d\g\g\g\k\l\o\9\o\n\7\z ]] 00:08:12.414 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:12.414 04:04:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:12.414 [2024-07-23 04:04:05.665268] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:12.414 [2024-07-23 04:04:05.665363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76834 ] 00:08:12.672 [2024-07-23 04:04:05.786743] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
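The long backslash-heavy runs throughout this section (the \8\n\2\e\j\o… operand above, and the similar blocks in the earlier append and nofollow checks) are not corruption: they are how bash xtrace prints the quoted right-hand side of a [[ … == … ]] content check, escaping every character so the operand matches literally rather than as a glob. A short illustration that reproduces the effect in any bash shell:

    set -x
    expected=8n2ejo80
    actual=8n2ejo80
    # xtrace prints:  [[ 8n2ejo80 == \8\n\2\e\j\o\8\0 ]]
    [[ $actual == "$expected" ]] && echo match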
00:08:12.672 [2024-07-23 04:04:05.804357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.672 [2024-07-23 04:04:05.869148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.672 [2024-07-23 04:04:05.925651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.931  Copying: 512/512 [B] (average 500 kBps) 00:08:12.931 00:08:12.931 04:04:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8n2ejo80ca0eulpii03hzfzz6n7zh9hxbui4grl28dvgfolslnpp8jo61z7jm7ub0n9k6yjeeup0j8o3wglzlqjuu9oz8jtxipg3pkqsx274xbzo6xzkwqbdqyao4xsmr34u0xzvy9dpb6elwnxqxye09sfemgodbu8g07iqsvu3dw221vxyqtrg05rb190gewom8rjtta7zuluu5tbj9iw1aavi0rwuq26lr48ir8tvvu1pukh91wbr1tz0a8qw3seqvm8eujavzail858671o3bgvw0gl6jh4oiu2ikin3x7czsjnvcox6zrjda4yg7h2me3aae49wk8pjxrz45zfbc56xo1py1xjddn921hj422svwzkbvqohvtv6wa4dyjpf8gi0pgi0ze1h0jplmxio7xhwcqtawabelhrpkic9yh1zmlgi181dq69sh1vx4ukvtdm2zrnf7qycmyimotn1bp9jlhfbaqx8hn1j7rqcxq1fdh7pdgggklo9on7z == \8\n\2\e\j\o\8\0\c\a\0\e\u\l\p\i\i\0\3\h\z\f\z\z\6\n\7\z\h\9\h\x\b\u\i\4\g\r\l\2\8\d\v\g\f\o\l\s\l\n\p\p\8\j\o\6\1\z\7\j\m\7\u\b\0\n\9\k\6\y\j\e\e\u\p\0\j\8\o\3\w\g\l\z\l\q\j\u\u\9\o\z\8\j\t\x\i\p\g\3\p\k\q\s\x\2\7\4\x\b\z\o\6\x\z\k\w\q\b\d\q\y\a\o\4\x\s\m\r\3\4\u\0\x\z\v\y\9\d\p\b\6\e\l\w\n\x\q\x\y\e\0\9\s\f\e\m\g\o\d\b\u\8\g\0\7\i\q\s\v\u\3\d\w\2\2\1\v\x\y\q\t\r\g\0\5\r\b\1\9\0\g\e\w\o\m\8\r\j\t\t\a\7\z\u\l\u\u\5\t\b\j\9\i\w\1\a\a\v\i\0\r\w\u\q\2\6\l\r\4\8\i\r\8\t\v\v\u\1\p\u\k\h\9\1\w\b\r\1\t\z\0\a\8\q\w\3\s\e\q\v\m\8\e\u\j\a\v\z\a\i\l\8\5\8\6\7\1\o\3\b\g\v\w\0\g\l\6\j\h\4\o\i\u\2\i\k\i\n\3\x\7\c\z\s\j\n\v\c\o\x\6\z\r\j\d\a\4\y\g\7\h\2\m\e\3\a\a\e\4\9\w\k\8\p\j\x\r\z\4\5\z\f\b\c\5\6\x\o\1\p\y\1\x\j\d\d\n\9\2\1\h\j\4\2\2\s\v\w\z\k\b\v\q\o\h\v\t\v\6\w\a\4\d\y\j\p\f\8\g\i\0\p\g\i\0\z\e\1\h\0\j\p\l\m\x\i\o\7\x\h\w\c\q\t\a\w\a\b\e\l\h\r\p\k\i\c\9\y\h\1\z\m\l\g\i\1\8\1\d\q\6\9\s\h\1\v\x\4\u\k\v\t\d\m\2\z\r\n\f\7\q\y\c\m\y\i\m\o\t\n\1\b\p\9\j\l\h\f\b\a\q\x\8\h\n\1\j\7\r\q\c\x\q\1\f\d\h\7\p\d\g\g\g\k\l\o\9\o\n\7\z ]] 00:08:12.931 04:04:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:12.931 04:04:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:12.931 [2024-07-23 04:04:06.212849] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:12.931 [2024-07-23 04:04:06.212975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76838 ] 00:08:13.189 [2024-07-23 04:04:06.333617] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:13.189 [2024-07-23 04:04:06.348530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.189 [2024-07-23 04:04:06.415957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.189 [2024-07-23 04:04:06.476910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.447  Copying: 512/512 [B] (average 83 kBps) 00:08:13.447 00:08:13.447 04:04:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8n2ejo80ca0eulpii03hzfzz6n7zh9hxbui4grl28dvgfolslnpp8jo61z7jm7ub0n9k6yjeeup0j8o3wglzlqjuu9oz8jtxipg3pkqsx274xbzo6xzkwqbdqyao4xsmr34u0xzvy9dpb6elwnxqxye09sfemgodbu8g07iqsvu3dw221vxyqtrg05rb190gewom8rjtta7zuluu5tbj9iw1aavi0rwuq26lr48ir8tvvu1pukh91wbr1tz0a8qw3seqvm8eujavzail858671o3bgvw0gl6jh4oiu2ikin3x7czsjnvcox6zrjda4yg7h2me3aae49wk8pjxrz45zfbc56xo1py1xjddn921hj422svwzkbvqohvtv6wa4dyjpf8gi0pgi0ze1h0jplmxio7xhwcqtawabelhrpkic9yh1zmlgi181dq69sh1vx4ukvtdm2zrnf7qycmyimotn1bp9jlhfbaqx8hn1j7rqcxq1fdh7pdgggklo9on7z == \8\n\2\e\j\o\8\0\c\a\0\e\u\l\p\i\i\0\3\h\z\f\z\z\6\n\7\z\h\9\h\x\b\u\i\4\g\r\l\2\8\d\v\g\f\o\l\s\l\n\p\p\8\j\o\6\1\z\7\j\m\7\u\b\0\n\9\k\6\y\j\e\e\u\p\0\j\8\o\3\w\g\l\z\l\q\j\u\u\9\o\z\8\j\t\x\i\p\g\3\p\k\q\s\x\2\7\4\x\b\z\o\6\x\z\k\w\q\b\d\q\y\a\o\4\x\s\m\r\3\4\u\0\x\z\v\y\9\d\p\b\6\e\l\w\n\x\q\x\y\e\0\9\s\f\e\m\g\o\d\b\u\8\g\0\7\i\q\s\v\u\3\d\w\2\2\1\v\x\y\q\t\r\g\0\5\r\b\1\9\0\g\e\w\o\m\8\r\j\t\t\a\7\z\u\l\u\u\5\t\b\j\9\i\w\1\a\a\v\i\0\r\w\u\q\2\6\l\r\4\8\i\r\8\t\v\v\u\1\p\u\k\h\9\1\w\b\r\1\t\z\0\a\8\q\w\3\s\e\q\v\m\8\e\u\j\a\v\z\a\i\l\8\5\8\6\7\1\o\3\b\g\v\w\0\g\l\6\j\h\4\o\i\u\2\i\k\i\n\3\x\7\c\z\s\j\n\v\c\o\x\6\z\r\j\d\a\4\y\g\7\h\2\m\e\3\a\a\e\4\9\w\k\8\p\j\x\r\z\4\5\z\f\b\c\5\6\x\o\1\p\y\1\x\j\d\d\n\9\2\1\h\j\4\2\2\s\v\w\z\k\b\v\q\o\h\v\t\v\6\w\a\4\d\y\j\p\f\8\g\i\0\p\g\i\0\z\e\1\h\0\j\p\l\m\x\i\o\7\x\h\w\c\q\t\a\w\a\b\e\l\h\r\p\k\i\c\9\y\h\1\z\m\l\g\i\1\8\1\d\q\6\9\s\h\1\v\x\4\u\k\v\t\d\m\2\z\r\n\f\7\q\y\c\m\y\i\m\o\t\n\1\b\p\9\j\l\h\f\b\a\q\x\8\h\n\1\j\7\r\q\c\x\q\1\f\d\h\7\p\d\g\g\g\k\l\o\9\o\n\7\z ]] 00:08:13.447 04:04:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:13.447 04:04:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:13.447 [2024-07-23 04:04:06.757488] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:13.447 [2024-07-23 04:04:06.757588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76853 ] 00:08:13.705 [2024-07-23 04:04:06.875211] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
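The START TEST / END TEST banners and the real/user/sys timings that punctuate this log come from the harness's run_test wrapper rather than from spdk_dd itself. A minimal, illustrative stand-in with the same shape (the real helper in autotest_common.sh does more bookkeeping):

    run_test() {
      local name=$1; shift
      printf '%s\n' '************************************' "START TEST $name" '************************************'
      time "$@"
      local rc=$?
      printf '%s\n' '************************************' "END TEST $name" '************************************'
      return "$rc"
    }

    # Usage, mirroring the invocations seen in this section:
    # run_test dd_flags_misc io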
00:08:13.705 [2024-07-23 04:04:06.890772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.705 [2024-07-23 04:04:06.956696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.705 [2024-07-23 04:04:07.015311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.963  Copying: 512/512 [B] (average 500 kBps) 00:08:13.963 00:08:13.963 04:04:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8n2ejo80ca0eulpii03hzfzz6n7zh9hxbui4grl28dvgfolslnpp8jo61z7jm7ub0n9k6yjeeup0j8o3wglzlqjuu9oz8jtxipg3pkqsx274xbzo6xzkwqbdqyao4xsmr34u0xzvy9dpb6elwnxqxye09sfemgodbu8g07iqsvu3dw221vxyqtrg05rb190gewom8rjtta7zuluu5tbj9iw1aavi0rwuq26lr48ir8tvvu1pukh91wbr1tz0a8qw3seqvm8eujavzail858671o3bgvw0gl6jh4oiu2ikin3x7czsjnvcox6zrjda4yg7h2me3aae49wk8pjxrz45zfbc56xo1py1xjddn921hj422svwzkbvqohvtv6wa4dyjpf8gi0pgi0ze1h0jplmxio7xhwcqtawabelhrpkic9yh1zmlgi181dq69sh1vx4ukvtdm2zrnf7qycmyimotn1bp9jlhfbaqx8hn1j7rqcxq1fdh7pdgggklo9on7z == \8\n\2\e\j\o\8\0\c\a\0\e\u\l\p\i\i\0\3\h\z\f\z\z\6\n\7\z\h\9\h\x\b\u\i\4\g\r\l\2\8\d\v\g\f\o\l\s\l\n\p\p\8\j\o\6\1\z\7\j\m\7\u\b\0\n\9\k\6\y\j\e\e\u\p\0\j\8\o\3\w\g\l\z\l\q\j\u\u\9\o\z\8\j\t\x\i\p\g\3\p\k\q\s\x\2\7\4\x\b\z\o\6\x\z\k\w\q\b\d\q\y\a\o\4\x\s\m\r\3\4\u\0\x\z\v\y\9\d\p\b\6\e\l\w\n\x\q\x\y\e\0\9\s\f\e\m\g\o\d\b\u\8\g\0\7\i\q\s\v\u\3\d\w\2\2\1\v\x\y\q\t\r\g\0\5\r\b\1\9\0\g\e\w\o\m\8\r\j\t\t\a\7\z\u\l\u\u\5\t\b\j\9\i\w\1\a\a\v\i\0\r\w\u\q\2\6\l\r\4\8\i\r\8\t\v\v\u\1\p\u\k\h\9\1\w\b\r\1\t\z\0\a\8\q\w\3\s\e\q\v\m\8\e\u\j\a\v\z\a\i\l\8\5\8\6\7\1\o\3\b\g\v\w\0\g\l\6\j\h\4\o\i\u\2\i\k\i\n\3\x\7\c\z\s\j\n\v\c\o\x\6\z\r\j\d\a\4\y\g\7\h\2\m\e\3\a\a\e\4\9\w\k\8\p\j\x\r\z\4\5\z\f\b\c\5\6\x\o\1\p\y\1\x\j\d\d\n\9\2\1\h\j\4\2\2\s\v\w\z\k\b\v\q\o\h\v\t\v\6\w\a\4\d\y\j\p\f\8\g\i\0\p\g\i\0\z\e\1\h\0\j\p\l\m\x\i\o\7\x\h\w\c\q\t\a\w\a\b\e\l\h\r\p\k\i\c\9\y\h\1\z\m\l\g\i\1\8\1\d\q\6\9\s\h\1\v\x\4\u\k\v\t\d\m\2\z\r\n\f\7\q\y\c\m\y\i\m\o\t\n\1\b\p\9\j\l\h\f\b\a\q\x\8\h\n\1\j\7\r\q\c\x\q\1\f\d\h\7\p\d\g\g\g\k\l\o\9\o\n\7\z ]] 00:08:13.963 04:04:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:13.963 04:04:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:13.963 04:04:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:13.963 04:04:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:13.963 04:04:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:13.963 04:04:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:14.221 [2024-07-23 04:04:07.312448] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:14.221 [2024-07-23 04:04:07.312714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76857 ] 00:08:14.221 [2024-07-23 04:04:07.435421] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:14.221 [2024-07-23 04:04:07.452572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.221 [2024-07-23 04:04:07.514663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.478 [2024-07-23 04:04:07.569289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.478  Copying: 512/512 [B] (average 500 kBps) 00:08:14.478 00:08:14.478 04:04:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xw8ayy8uawfgvj44l1knjps5iqryst93zn5ydl93x8jouphyatmk8sda3agb8mxyn8bn20xpeoq3wlio11fma5989tt5ozaj1gymfea3hodzlohbxs6t416eo0kgz00gu69t2wq0krf8y8ssxxt2nt8vggkjv52z4iwkfstd448o2z350kruv8aufjfxwqm7rhp68tth1dyuekvtbcoj33wv22zh2fij7hy10geebbu3twa9iexne3ebgcfawmgfp33b1hsrpfpm6dgrm0j5kf2i4fkf7ism6bugh3pr6d8k2vxtzcxjo756kid83vv6wpjopjgzkmxjng0dqq5l9y1utuq0uc4jhqjqyqwya3maj9wefncepq55vclfhckj1knw0pa4qdblmxtgyyj2r6i57ajultj2uextzu86s3446qa3sntoavwuutrefupoq93921o316gcbluhrm4tpg71bjq0umzklars6vpuxs80fw5a0q3kjnwl9o5lb62m == \x\w\8\a\y\y\8\u\a\w\f\g\v\j\4\4\l\1\k\n\j\p\s\5\i\q\r\y\s\t\9\3\z\n\5\y\d\l\9\3\x\8\j\o\u\p\h\y\a\t\m\k\8\s\d\a\3\a\g\b\8\m\x\y\n\8\b\n\2\0\x\p\e\o\q\3\w\l\i\o\1\1\f\m\a\5\9\8\9\t\t\5\o\z\a\j\1\g\y\m\f\e\a\3\h\o\d\z\l\o\h\b\x\s\6\t\4\1\6\e\o\0\k\g\z\0\0\g\u\6\9\t\2\w\q\0\k\r\f\8\y\8\s\s\x\x\t\2\n\t\8\v\g\g\k\j\v\5\2\z\4\i\w\k\f\s\t\d\4\4\8\o\2\z\3\5\0\k\r\u\v\8\a\u\f\j\f\x\w\q\m\7\r\h\p\6\8\t\t\h\1\d\y\u\e\k\v\t\b\c\o\j\3\3\w\v\2\2\z\h\2\f\i\j\7\h\y\1\0\g\e\e\b\b\u\3\t\w\a\9\i\e\x\n\e\3\e\b\g\c\f\a\w\m\g\f\p\3\3\b\1\h\s\r\p\f\p\m\6\d\g\r\m\0\j\5\k\f\2\i\4\f\k\f\7\i\s\m\6\b\u\g\h\3\p\r\6\d\8\k\2\v\x\t\z\c\x\j\o\7\5\6\k\i\d\8\3\v\v\6\w\p\j\o\p\j\g\z\k\m\x\j\n\g\0\d\q\q\5\l\9\y\1\u\t\u\q\0\u\c\4\j\h\q\j\q\y\q\w\y\a\3\m\a\j\9\w\e\f\n\c\e\p\q\5\5\v\c\l\f\h\c\k\j\1\k\n\w\0\p\a\4\q\d\b\l\m\x\t\g\y\y\j\2\r\6\i\5\7\a\j\u\l\t\j\2\u\e\x\t\z\u\8\6\s\3\4\4\6\q\a\3\s\n\t\o\a\v\w\u\u\t\r\e\f\u\p\o\q\9\3\9\2\1\o\3\1\6\g\c\b\l\u\h\r\m\4\t\p\g\7\1\b\j\q\0\u\m\z\k\l\a\r\s\6\v\p\u\x\s\8\0\f\w\5\a\0\q\3\k\j\n\w\l\9\o\5\l\b\6\2\m ]] 00:08:14.478 04:04:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:14.478 04:04:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:14.735 [2024-07-23 04:04:07.871828] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:14.735 [2024-07-23 04:04:07.871989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76872 ] 00:08:14.735 [2024-07-23 04:04:08.001121] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:14.735 [2024-07-23 04:04:08.019273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.993 [2024-07-23 04:04:08.127709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.993 [2024-07-23 04:04:08.187285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:15.250  Copying: 512/512 [B] (average 500 kBps) 00:08:15.250 00:08:15.250 04:04:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xw8ayy8uawfgvj44l1knjps5iqryst93zn5ydl93x8jouphyatmk8sda3agb8mxyn8bn20xpeoq3wlio11fma5989tt5ozaj1gymfea3hodzlohbxs6t416eo0kgz00gu69t2wq0krf8y8ssxxt2nt8vggkjv52z4iwkfstd448o2z350kruv8aufjfxwqm7rhp68tth1dyuekvtbcoj33wv22zh2fij7hy10geebbu3twa9iexne3ebgcfawmgfp33b1hsrpfpm6dgrm0j5kf2i4fkf7ism6bugh3pr6d8k2vxtzcxjo756kid83vv6wpjopjgzkmxjng0dqq5l9y1utuq0uc4jhqjqyqwya3maj9wefncepq55vclfhckj1knw0pa4qdblmxtgyyj2r6i57ajultj2uextzu86s3446qa3sntoavwuutrefupoq93921o316gcbluhrm4tpg71bjq0umzklars6vpuxs80fw5a0q3kjnwl9o5lb62m == \x\w\8\a\y\y\8\u\a\w\f\g\v\j\4\4\l\1\k\n\j\p\s\5\i\q\r\y\s\t\9\3\z\n\5\y\d\l\9\3\x\8\j\o\u\p\h\y\a\t\m\k\8\s\d\a\3\a\g\b\8\m\x\y\n\8\b\n\2\0\x\p\e\o\q\3\w\l\i\o\1\1\f\m\a\5\9\8\9\t\t\5\o\z\a\j\1\g\y\m\f\e\a\3\h\o\d\z\l\o\h\b\x\s\6\t\4\1\6\e\o\0\k\g\z\0\0\g\u\6\9\t\2\w\q\0\k\r\f\8\y\8\s\s\x\x\t\2\n\t\8\v\g\g\k\j\v\5\2\z\4\i\w\k\f\s\t\d\4\4\8\o\2\z\3\5\0\k\r\u\v\8\a\u\f\j\f\x\w\q\m\7\r\h\p\6\8\t\t\h\1\d\y\u\e\k\v\t\b\c\o\j\3\3\w\v\2\2\z\h\2\f\i\j\7\h\y\1\0\g\e\e\b\b\u\3\t\w\a\9\i\e\x\n\e\3\e\b\g\c\f\a\w\m\g\f\p\3\3\b\1\h\s\r\p\f\p\m\6\d\g\r\m\0\j\5\k\f\2\i\4\f\k\f\7\i\s\m\6\b\u\g\h\3\p\r\6\d\8\k\2\v\x\t\z\c\x\j\o\7\5\6\k\i\d\8\3\v\v\6\w\p\j\o\p\j\g\z\k\m\x\j\n\g\0\d\q\q\5\l\9\y\1\u\t\u\q\0\u\c\4\j\h\q\j\q\y\q\w\y\a\3\m\a\j\9\w\e\f\n\c\e\p\q\5\5\v\c\l\f\h\c\k\j\1\k\n\w\0\p\a\4\q\d\b\l\m\x\t\g\y\y\j\2\r\6\i\5\7\a\j\u\l\t\j\2\u\e\x\t\z\u\8\6\s\3\4\4\6\q\a\3\s\n\t\o\a\v\w\u\u\t\r\e\f\u\p\o\q\9\3\9\2\1\o\3\1\6\g\c\b\l\u\h\r\m\4\t\p\g\7\1\b\j\q\0\u\m\z\k\l\a\r\s\6\v\p\u\x\s\8\0\f\w\5\a\0\q\3\k\j\n\w\l\9\o\5\l\b\6\2\m ]] 00:08:15.250 04:04:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:15.250 04:04:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:15.250 [2024-07-23 04:04:08.496828] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:15.250 [2024-07-23 04:04:08.496986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76876 ] 00:08:15.508 [2024-07-23 04:04:08.625228] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:15.508 [2024-07-23 04:04:08.638417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.508 [2024-07-23 04:04:08.742762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.508 [2024-07-23 04:04:08.799670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:15.766  Copying: 512/512 [B] (average 166 kBps) 00:08:15.766 00:08:15.766 04:04:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xw8ayy8uawfgvj44l1knjps5iqryst93zn5ydl93x8jouphyatmk8sda3agb8mxyn8bn20xpeoq3wlio11fma5989tt5ozaj1gymfea3hodzlohbxs6t416eo0kgz00gu69t2wq0krf8y8ssxxt2nt8vggkjv52z4iwkfstd448o2z350kruv8aufjfxwqm7rhp68tth1dyuekvtbcoj33wv22zh2fij7hy10geebbu3twa9iexne3ebgcfawmgfp33b1hsrpfpm6dgrm0j5kf2i4fkf7ism6bugh3pr6d8k2vxtzcxjo756kid83vv6wpjopjgzkmxjng0dqq5l9y1utuq0uc4jhqjqyqwya3maj9wefncepq55vclfhckj1knw0pa4qdblmxtgyyj2r6i57ajultj2uextzu86s3446qa3sntoavwuutrefupoq93921o316gcbluhrm4tpg71bjq0umzklars6vpuxs80fw5a0q3kjnwl9o5lb62m == \x\w\8\a\y\y\8\u\a\w\f\g\v\j\4\4\l\1\k\n\j\p\s\5\i\q\r\y\s\t\9\3\z\n\5\y\d\l\9\3\x\8\j\o\u\p\h\y\a\t\m\k\8\s\d\a\3\a\g\b\8\m\x\y\n\8\b\n\2\0\x\p\e\o\q\3\w\l\i\o\1\1\f\m\a\5\9\8\9\t\t\5\o\z\a\j\1\g\y\m\f\e\a\3\h\o\d\z\l\o\h\b\x\s\6\t\4\1\6\e\o\0\k\g\z\0\0\g\u\6\9\t\2\w\q\0\k\r\f\8\y\8\s\s\x\x\t\2\n\t\8\v\g\g\k\j\v\5\2\z\4\i\w\k\f\s\t\d\4\4\8\o\2\z\3\5\0\k\r\u\v\8\a\u\f\j\f\x\w\q\m\7\r\h\p\6\8\t\t\h\1\d\y\u\e\k\v\t\b\c\o\j\3\3\w\v\2\2\z\h\2\f\i\j\7\h\y\1\0\g\e\e\b\b\u\3\t\w\a\9\i\e\x\n\e\3\e\b\g\c\f\a\w\m\g\f\p\3\3\b\1\h\s\r\p\f\p\m\6\d\g\r\m\0\j\5\k\f\2\i\4\f\k\f\7\i\s\m\6\b\u\g\h\3\p\r\6\d\8\k\2\v\x\t\z\c\x\j\o\7\5\6\k\i\d\8\3\v\v\6\w\p\j\o\p\j\g\z\k\m\x\j\n\g\0\d\q\q\5\l\9\y\1\u\t\u\q\0\u\c\4\j\h\q\j\q\y\q\w\y\a\3\m\a\j\9\w\e\f\n\c\e\p\q\5\5\v\c\l\f\h\c\k\j\1\k\n\w\0\p\a\4\q\d\b\l\m\x\t\g\y\y\j\2\r\6\i\5\7\a\j\u\l\t\j\2\u\e\x\t\z\u\8\6\s\3\4\4\6\q\a\3\s\n\t\o\a\v\w\u\u\t\r\e\f\u\p\o\q\9\3\9\2\1\o\3\1\6\g\c\b\l\u\h\r\m\4\t\p\g\7\1\b\j\q\0\u\m\z\k\l\a\r\s\6\v\p\u\x\s\8\0\f\w\5\a\0\q\3\k\j\n\w\l\9\o\5\l\b\6\2\m ]] 00:08:15.766 04:04:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:15.766 04:04:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:15.766 [2024-07-23 04:04:09.101847] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:15.766 [2024-07-23 04:04:09.102026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76891 ] 00:08:16.024 [2024-07-23 04:04:09.229240] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
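The passes above cycle the same 512-byte payload through spdk_dd once per output flag (direct, nonblock, sync, and the dsync run just launched) and then verify the copy against the original. A rough stand-in for that loop, using GNU dd instead of spdk_dd and assumed file names, not the harness's own gen_bytes helper:

    # One copy per open(2) output flag, then a byte-for-byte comparison.
    head -c 512 /dev/urandom > dump0
    for flag in direct nonblock sync dsync; do
        # skip flags the filesystem rejects (e.g. O_DIRECT on tmpfs)
        dd if=dump0 of=dump1 oflag="$flag" bs=512 count=1 2>/dev/null || continue
        cmp -s dump0 dump1 && echo "oflag=$flag: round trip OK"
    done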
00:08:16.024 [2024-07-23 04:04:09.245599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.024 [2024-07-23 04:04:09.349382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.282 [2024-07-23 04:04:09.405350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.540  Copying: 512/512 [B] (average 250 kBps) 00:08:16.540 00:08:16.540 ************************************ 00:08:16.540 END TEST dd_flags_misc 00:08:16.540 ************************************ 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xw8ayy8uawfgvj44l1knjps5iqryst93zn5ydl93x8jouphyatmk8sda3agb8mxyn8bn20xpeoq3wlio11fma5989tt5ozaj1gymfea3hodzlohbxs6t416eo0kgz00gu69t2wq0krf8y8ssxxt2nt8vggkjv52z4iwkfstd448o2z350kruv8aufjfxwqm7rhp68tth1dyuekvtbcoj33wv22zh2fij7hy10geebbu3twa9iexne3ebgcfawmgfp33b1hsrpfpm6dgrm0j5kf2i4fkf7ism6bugh3pr6d8k2vxtzcxjo756kid83vv6wpjopjgzkmxjng0dqq5l9y1utuq0uc4jhqjqyqwya3maj9wefncepq55vclfhckj1knw0pa4qdblmxtgyyj2r6i57ajultj2uextzu86s3446qa3sntoavwuutrefupoq93921o316gcbluhrm4tpg71bjq0umzklars6vpuxs80fw5a0q3kjnwl9o5lb62m == \x\w\8\a\y\y\8\u\a\w\f\g\v\j\4\4\l\1\k\n\j\p\s\5\i\q\r\y\s\t\9\3\z\n\5\y\d\l\9\3\x\8\j\o\u\p\h\y\a\t\m\k\8\s\d\a\3\a\g\b\8\m\x\y\n\8\b\n\2\0\x\p\e\o\q\3\w\l\i\o\1\1\f\m\a\5\9\8\9\t\t\5\o\z\a\j\1\g\y\m\f\e\a\3\h\o\d\z\l\o\h\b\x\s\6\t\4\1\6\e\o\0\k\g\z\0\0\g\u\6\9\t\2\w\q\0\k\r\f\8\y\8\s\s\x\x\t\2\n\t\8\v\g\g\k\j\v\5\2\z\4\i\w\k\f\s\t\d\4\4\8\o\2\z\3\5\0\k\r\u\v\8\a\u\f\j\f\x\w\q\m\7\r\h\p\6\8\t\t\h\1\d\y\u\e\k\v\t\b\c\o\j\3\3\w\v\2\2\z\h\2\f\i\j\7\h\y\1\0\g\e\e\b\b\u\3\t\w\a\9\i\e\x\n\e\3\e\b\g\c\f\a\w\m\g\f\p\3\3\b\1\h\s\r\p\f\p\m\6\d\g\r\m\0\j\5\k\f\2\i\4\f\k\f\7\i\s\m\6\b\u\g\h\3\p\r\6\d\8\k\2\v\x\t\z\c\x\j\o\7\5\6\k\i\d\8\3\v\v\6\w\p\j\o\p\j\g\z\k\m\x\j\n\g\0\d\q\q\5\l\9\y\1\u\t\u\q\0\u\c\4\j\h\q\j\q\y\q\w\y\a\3\m\a\j\9\w\e\f\n\c\e\p\q\5\5\v\c\l\f\h\c\k\j\1\k\n\w\0\p\a\4\q\d\b\l\m\x\t\g\y\y\j\2\r\6\i\5\7\a\j\u\l\t\j\2\u\e\x\t\z\u\8\6\s\3\4\4\6\q\a\3\s\n\t\o\a\v\w\u\u\t\r\e\f\u\p\o\q\9\3\9\2\1\o\3\1\6\g\c\b\l\u\h\r\m\4\t\p\g\7\1\b\j\q\0\u\m\z\k\l\a\r\s\6\v\p\u\x\s\8\0\f\w\5\a\0\q\3\k\j\n\w\l\9\o\5\l\b\6\2\m ]] 00:08:16.540 00:08:16.540 real 0m4.613s 00:08:16.540 user 0m2.524s 00:08:16.540 sys 0m2.246s 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:16.540 * Second test run, disabling liburing, forcing AIO 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:16.540 ************************************ 00:08:16.540 START TEST dd_flag_append_forced_aio 00:08:16.540 ************************************ 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio 
-- common/autotest_common.sh@1123 -- # append 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:16.540 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=r1ags47txozn4x8o32pb7tgjy988hun2 00:08:16.541 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:16.541 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:16.541 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:16.541 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=46t1tch58ts62t63qjn5faz58dixzl07 00:08:16.541 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s r1ags47txozn4x8o32pb7tgjy988hun2 00:08:16.541 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 46t1tch58ts62t63qjn5faz58dixzl07 00:08:16.541 04:04:09 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:16.541 [2024-07-23 04:04:09.760026] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:16.541 [2024-07-23 04:04:09.760146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76920 ] 00:08:16.798 [2024-07-23 04:04:09.882120] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
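The append pass above writes two 32-character strings, copies dump0 onto dump1 with --oflag=append, and then expects dump1 to hold its old contents followed by dump0 (the comparison is visible just below). A hedged, self-contained equivalent using plain GNU dd, with the random-string generator being an assumption rather than the harness's gen_bytes:

    # O_APPEND round trip: destination must equal old destination + source.
    dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
    dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
    printf '%s' "$dump0" > dd.dump0
    printf '%s' "$dump1" > dd.dump1
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc 2>/dev/null
    [[ "$(<dd.dump1)" == "${dump1}${dump0}" ]] && echo "append OK"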
00:08:16.798 [2024-07-23 04:04:09.896988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.798 [2024-07-23 04:04:10.002448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.798 [2024-07-23 04:04:10.061177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.056  Copying: 32/32 [B] (average 31 kBps) 00:08:17.056 00:08:17.056 ************************************ 00:08:17.056 END TEST dd_flag_append_forced_aio 00:08:17.056 ************************************ 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 46t1tch58ts62t63qjn5faz58dixzl07r1ags47txozn4x8o32pb7tgjy988hun2 == \4\6\t\1\t\c\h\5\8\t\s\6\2\t\6\3\q\j\n\5\f\a\z\5\8\d\i\x\z\l\0\7\r\1\a\g\s\4\7\t\x\o\z\n\4\x\8\o\3\2\p\b\7\t\g\j\y\9\8\8\h\u\n\2 ]] 00:08:17.056 00:08:17.056 real 0m0.617s 00:08:17.056 user 0m0.340s 00:08:17.056 sys 0m0.154s 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:17.056 ************************************ 00:08:17.056 START TEST dd_flag_directory_forced_aio 00:08:17.056 ************************************ 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.056 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.057 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.057 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.057 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.057 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.057 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:08:17.057 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.057 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.057 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.315 [2024-07-23 04:04:10.437756] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:17.315 [2024-07-23 04:04:10.437908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76946 ] 00:08:17.315 [2024-07-23 04:04:10.563246] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:17.315 [2024-07-23 04:04:10.581732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.573 [2024-07-23 04:04:10.662287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.573 [2024-07-23 04:04:10.719229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.573 [2024-07-23 04:04:10.749456] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:17.573 [2024-07-23 04:04:10.749514] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:17.573 [2024-07-23 04:04:10.749532] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.573 [2024-07-23 04:04:10.862791] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:17.872 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:08:17.872 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:17.872 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:08:17.872 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:17.872 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:17.872 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:17.873 04:04:10 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.873 04:04:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:17.873 [2024-07-23 04:04:11.000610] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:17.873 [2024-07-23 04:04:11.000695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76961 ] 00:08:17.873 [2024-07-23 04:04:11.121194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
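The directory passes wrap spdk_dd in the NOT helper because success here means the copy must fail: opening a regular file with O_DIRECTORY returns ENOTDIR, which surfaces as the "Not a directory" errors just below, and the es=236 lines are the helper remapping that exit status. A compressed sketch of the pattern, where the NOT body is an assumption (the real helper lives in autotest_common.sh) and GNU dd stands in for spdk_dd:

    # Succeed only when the wrapped command fails, as O_DIRECTORY on a
    # regular file must.
    NOT() { ! "$@"; }
    : > dd.dump0                                    # a regular file
    NOT dd if=dd.dump0 of=/dev/null iflag=directory 2>/dev/null \
        && echo "failed with ENOTDIR as expected"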
00:08:17.873 [2024-07-23 04:04:11.138537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.873 [2024-07-23 04:04:11.200778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.132 [2024-07-23 04:04:11.256007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.132 [2024-07-23 04:04:11.285742] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:18.132 [2024-07-23 04:04:11.285802] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:18.132 [2024-07-23 04:04:11.285820] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.132 [2024-07-23 04:04:11.388888] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:18.391 ************************************ 00:08:18.391 END TEST dd_flag_directory_forced_aio 00:08:18.391 ************************************ 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:18.391 00:08:18.391 real 0m1.114s 00:08:18.391 user 0m0.588s 00:08:18.391 sys 0m0.314s 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:18.391 ************************************ 00:08:18.391 START TEST dd_flag_nofollow_forced_aio 00:08:18.391 ************************************ 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.391 04:04:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.391 [2024-07-23 04:04:11.602736] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:18.391 [2024-07-23 04:04:11.602798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76990 ] 00:08:18.391 [2024-07-23 04:04:11.716860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
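Here the test symlinks both dump files and expects the nofollow passes to fail: opening a symlink with O_NOFOLLOW returns ELOOP, reported just below as "Too many levels of symbolic links". A hedged stand-in with GNU dd, reusing the link naming seen in the log:

    # O_NOFOLLOW on a symlink must fail with ELOOP.
    : > dd.dump0
    ln -fs dd.dump0 dd.dump0.link
    if ! dd if=dd.dump0.link of=/dev/null iflag=nofollow 2>err.log; then
        grep -q 'symbolic links' err.log && echo "ELOOP as expected"
    fi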
00:08:18.391 [2024-07-23 04:04:11.730411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.650 [2024-07-23 04:04:11.790321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.650 [2024-07-23 04:04:11.841923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.650 [2024-07-23 04:04:11.871788] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:18.650 [2024-07-23 04:04:11.871833] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:18.650 [2024-07-23 04:04:11.871858] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.650 [2024-07-23 04:04:11.978754] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.909 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:18.909 [2024-07-23 04:04:12.130045] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:18.909 [2024-07-23 04:04:12.130178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76999 ] 00:08:19.169 [2024-07-23 04:04:12.253413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:19.169 [2024-07-23 04:04:12.268835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.169 [2024-07-23 04:04:12.317716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.169 [2024-07-23 04:04:12.368665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.169 [2024-07-23 04:04:12.396426] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:19.169 [2024-07-23 04:04:12.396480] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:19.169 [2024-07-23 04:04:12.396494] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.169 [2024-07-23 04:04:12.505304] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:19.428 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:08:19.428 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:19.428 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:08:19.428 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:19.428 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:19.428 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:19.428 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:08:19.428 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:19.428 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:19.428 04:04:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.428 [2024-07-23 04:04:12.645297] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:19.428 [2024-07-23 04:04:12.645363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77007 ] 00:08:19.428 [2024-07-23 04:04:12.759557] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:19.687 [2024-07-23 04:04:12.773119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.687 [2024-07-23 04:04:12.828345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.687 [2024-07-23 04:04:12.879271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.945  Copying: 512/512 [B] (average 500 kBps) 00:08:19.945 00:08:19.945 ************************************ 00:08:19.945 END TEST dd_flag_nofollow_forced_aio 00:08:19.945 ************************************ 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ qvgifpfg26c49tk5o44r2cni3knqfwif6wt6xc07l0gnoc15jrtqpy9mchgk7w0r8gpkuojw1xwzc0b6ezwx8kem7f2jxq37iokjunblimkaaso9wfs4os7qe2gpeusssopd2rbw3u9gr9cdpahdiodopgkqtwdpc93cqjk2xypdfgl9qj39hissz6gom6ezlpr7b3d8505cfnqf13942zbzdjocdah443rpoi78nhgzuwhoieuiv4munbmrojzk3ov95i4t79wn2wqkyvbeuhenc4ar6hxbhec5t6btfj5fp42nk0cvikrqnq6rkfntgv5yemgcbvueflzpxdkvbwxij73msaemledd6g9k11v80fm3gu1s4j3e469nq4jfn2iok8z91prmnfyuf6uxgdsupbh3axedm8jr6fe1anygvndww6ni92ci44um1y20j042jh2so4qn0per3shehmneg4kmjsx0s2iw8a44n79n6dcgbubpwskeyfoph59f == \q\v\g\i\f\p\f\g\2\6\c\4\9\t\k\5\o\4\4\r\2\c\n\i\3\k\n\q\f\w\i\f\6\w\t\6\x\c\0\7\l\0\g\n\o\c\1\5\j\r\t\q\p\y\9\m\c\h\g\k\7\w\0\r\8\g\p\k\u\o\j\w\1\x\w\z\c\0\b\6\e\z\w\x\8\k\e\m\7\f\2\j\x\q\3\7\i\o\k\j\u\n\b\l\i\m\k\a\a\s\o\9\w\f\s\4\o\s\7\q\e\2\g\p\e\u\s\s\s\o\p\d\2\r\b\w\3\u\9\g\r\9\c\d\p\a\h\d\i\o\d\o\p\g\k\q\t\w\d\p\c\9\3\c\q\j\k\2\x\y\p\d\f\g\l\9\q\j\3\9\h\i\s\s\z\6\g\o\m\6\e\z\l\p\r\7\b\3\d\8\5\0\5\c\f\n\q\f\1\3\9\4\2\z\b\z\d\j\o\c\d\a\h\4\4\3\r\p\o\i\7\8\n\h\g\z\u\w\h\o\i\e\u\i\v\4\m\u\n\b\m\r\o\j\z\k\3\o\v\9\5\i\4\t\7\9\w\n\2\w\q\k\y\v\b\e\u\h\e\n\c\4\a\r\6\h\x\b\h\e\c\5\t\6\b\t\f\j\5\f\p\4\2\n\k\0\c\v\i\k\r\q\n\q\6\r\k\f\n\t\g\v\5\y\e\m\g\c\b\v\u\e\f\l\z\p\x\d\k\v\b\w\x\i\j\7\3\m\s\a\e\m\l\e\d\d\6\g\9\k\1\1\v\8\0\f\m\3\g\u\1\s\4\j\3\e\4\6\9\n\q\4\j\f\n\2\i\o\k\8\z\9\1\p\r\m\n\f\y\u\f\6\u\x\g\d\s\u\p\b\h\3\a\x\e\d\m\8\j\r\6\f\e\1\a\n\y\g\v\n\d\w\w\6\n\i\9\2\c\i\4\4\u\m\1\y\2\0\j\0\4\2\j\h\2\s\o\4\q\n\0\p\e\r\3\s\h\e\h\m\n\e\g\4\k\m\j\s\x\0\s\2\i\w\8\a\4\4\n\7\9\n\6\d\c\g\b\u\b\p\w\s\k\e\y\f\o\p\h\5\9\f ]] 00:08:19.946 00:08:19.946 real 0m1.573s 00:08:19.946 user 0m0.813s 00:08:19.946 sys 0m0.430s 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:19.946 
************************************ 00:08:19.946 START TEST dd_flag_noatime_forced_aio 00:08:19.946 ************************************ 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721707452 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721707453 00:08:19.946 04:04:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:20.879 04:04:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.137 [2024-07-23 04:04:14.269852] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:21.137 [2024-07-23 04:04:14.269975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77047 ] 00:08:21.137 [2024-07-23 04:04:14.391632] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:21.137 [2024-07-23 04:04:14.412212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.137 [2024-07-23 04:04:14.473932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.395 [2024-07-23 04:04:14.529815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:21.654  Copying: 512/512 [B] (average 500 kBps) 00:08:21.654 00:08:21.654 04:04:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.654 04:04:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721707452 )) 00:08:21.654 04:04:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.654 04:04:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721707453 )) 00:08:21.654 04:04:14 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.654 [2024-07-23 04:04:14.827936] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:21.654 [2024-07-23 04:04:14.827999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77059 ] 00:08:21.654 [2024-07-23 04:04:14.942224] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
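The noatime test records each file's access time as epoch seconds with stat --printf=%X, copies with --iflag=noatime, and asserts the source atime did not move; the later pass without the flag is allowed to advance it (the atime_if < ... check a little further down). A plain-dd sketch of the same idea, with the caveat that O_NOATIME needs file ownership or CAP_FOWNER and that relatime mounts can hide the difference:

    # Access time around an O_NOATIME read (epoch seconds via stat %X).
    head -c 512 /dev/urandom > dd.dump0
    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1
    dd if=dd.dump0 of=/dev/null iflag=noatime 2>/dev/null
    (( atime_before == $(stat --printf=%X dd.dump0) )) && echo "atime preserved"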
00:08:21.654 [2024-07-23 04:04:14.954860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.913 [2024-07-23 04:04:15.009353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.913 [2024-07-23 04:04:15.062770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.172  Copying: 512/512 [B] (average 500 kBps) 00:08:22.172 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:22.172 ************************************ 00:08:22.172 END TEST dd_flag_noatime_forced_aio 00:08:22.172 ************************************ 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721707455 )) 00:08:22.172 00:08:22.172 real 0m2.123s 00:08:22.172 user 0m0.576s 00:08:22.172 sys 0m0.302s 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:22.172 ************************************ 00:08:22.172 START TEST dd_flags_misc_forced_aio 00:08:22.172 ************************************ 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.172 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:22.172 [2024-07-23 04:04:15.430533] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:22.172 [2024-07-23 04:04:15.430617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77085 ] 00:08:22.430 [2024-07-23 04:04:15.551375] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:22.430 [2024-07-23 04:04:15.568481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.430 [2024-07-23 04:04:15.620294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.430 [2024-07-23 04:04:15.676650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.688  Copying: 512/512 [B] (average 500 kBps) 00:08:22.688 00:08:22.688 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xprstf2donwdlqggy4csfic9yqn2120chvi798jdivcxi9olpdtzgzqstlvfie01e588mquteee1px89rubx5tpct47e8cn1jwmhgt4bifdaetx1xpu84wu8a7d7z5tpbuseescgq4dsrhrfskcyorg44u3nl2i7850jbgapw8a24x67qthbf2vtt1ui3yw4sygpc9c3tl94j4443rsxtm615fn7m0itraxsoufan02xo9zpaej5xwetdk0gmfkqk4j70hs8ip79xz14oft6rrdi3jngctt8asgzxq3u4bwuhg6pgvd6wnb7oi5upzti3j8kr2z5dx42rgavq09ll7h3o3sa7xsdhsca5jnk694ukmf3p6udz6p2zxk64gxhjbk2h0di5kdff582i0w9a7rul6qsiz09prup6vp0cqu0px2lwtss4k52pjc1ud8y2t398nzpdmatjb5xx2nw657n1sfs588xdo40e35kd6zu31e062t0dnaldd8g87h4 == \x\p\r\s\t\f\2\d\o\n\w\d\l\q\g\g\y\4\c\s\f\i\c\9\y\q\n\2\1\2\0\c\h\v\i\7\9\8\j\d\i\v\c\x\i\9\o\l\p\d\t\z\g\z\q\s\t\l\v\f\i\e\0\1\e\5\8\8\m\q\u\t\e\e\e\1\p\x\8\9\r\u\b\x\5\t\p\c\t\4\7\e\8\c\n\1\j\w\m\h\g\t\4\b\i\f\d\a\e\t\x\1\x\p\u\8\4\w\u\8\a\7\d\7\z\5\t\p\b\u\s\e\e\s\c\g\q\4\d\s\r\h\r\f\s\k\c\y\o\r\g\4\4\u\3\n\l\2\i\7\8\5\0\j\b\g\a\p\w\8\a\2\4\x\6\7\q\t\h\b\f\2\v\t\t\1\u\i\3\y\w\4\s\y\g\p\c\9\c\3\t\l\9\4\j\4\4\4\3\r\s\x\t\m\6\1\5\f\n\7\m\0\i\t\r\a\x\s\o\u\f\a\n\0\2\x\o\9\z\p\a\e\j\5\x\w\e\t\d\k\0\g\m\f\k\q\k\4\j\7\0\h\s\8\i\p\7\9\x\z\1\4\o\f\t\6\r\r\d\i\3\j\n\g\c\t\t\8\a\s\g\z\x\q\3\u\4\b\w\u\h\g\6\p\g\v\d\6\w\n\b\7\o\i\5\u\p\z\t\i\3\j\8\k\r\2\z\5\d\x\4\2\r\g\a\v\q\0\9\l\l\7\h\3\o\3\s\a\7\x\s\d\h\s\c\a\5\j\n\k\6\9\4\u\k\m\f\3\p\6\u\d\z\6\p\2\z\x\k\6\4\g\x\h\j\b\k\2\h\0\d\i\5\k\d\f\f\5\8\2\i\0\w\9\a\7\r\u\l\6\q\s\i\z\0\9\p\r\u\p\6\v\p\0\c\q\u\0\p\x\2\l\w\t\s\s\4\k\5\2\p\j\c\1\u\d\8\y\2\t\3\9\8\n\z\p\d\m\a\t\j\b\5\x\x\2\n\w\6\5\7\n\1\s\f\s\5\8\8\x\d\o\4\0\e\3\5\k\d\6\z\u\3\1\e\0\6\2\t\0\d\n\a\l\d\d\8\g\8\7\h\4 ]] 00:08:22.688 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.688 04:04:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:22.688 [2024-07-23 04:04:15.968287] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:22.689 [2024-07-23 04:04:15.968413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77093 ] 00:08:22.947 [2024-07-23 04:04:16.090807] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:22.947 [2024-07-23 04:04:16.108028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.947 [2024-07-23 04:04:16.159718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.947 [2024-07-23 04:04:16.210240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:23.206  Copying: 512/512 [B] (average 500 kBps) 00:08:23.206 00:08:23.206 04:04:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xprstf2donwdlqggy4csfic9yqn2120chvi798jdivcxi9olpdtzgzqstlvfie01e588mquteee1px89rubx5tpct47e8cn1jwmhgt4bifdaetx1xpu84wu8a7d7z5tpbuseescgq4dsrhrfskcyorg44u3nl2i7850jbgapw8a24x67qthbf2vtt1ui3yw4sygpc9c3tl94j4443rsxtm615fn7m0itraxsoufan02xo9zpaej5xwetdk0gmfkqk4j70hs8ip79xz14oft6rrdi3jngctt8asgzxq3u4bwuhg6pgvd6wnb7oi5upzti3j8kr2z5dx42rgavq09ll7h3o3sa7xsdhsca5jnk694ukmf3p6udz6p2zxk64gxhjbk2h0di5kdff582i0w9a7rul6qsiz09prup6vp0cqu0px2lwtss4k52pjc1ud8y2t398nzpdmatjb5xx2nw657n1sfs588xdo40e35kd6zu31e062t0dnaldd8g87h4 == \x\p\r\s\t\f\2\d\o\n\w\d\l\q\g\g\y\4\c\s\f\i\c\9\y\q\n\2\1\2\0\c\h\v\i\7\9\8\j\d\i\v\c\x\i\9\o\l\p\d\t\z\g\z\q\s\t\l\v\f\i\e\0\1\e\5\8\8\m\q\u\t\e\e\e\1\p\x\8\9\r\u\b\x\5\t\p\c\t\4\7\e\8\c\n\1\j\w\m\h\g\t\4\b\i\f\d\a\e\t\x\1\x\p\u\8\4\w\u\8\a\7\d\7\z\5\t\p\b\u\s\e\e\s\c\g\q\4\d\s\r\h\r\f\s\k\c\y\o\r\g\4\4\u\3\n\l\2\i\7\8\5\0\j\b\g\a\p\w\8\a\2\4\x\6\7\q\t\h\b\f\2\v\t\t\1\u\i\3\y\w\4\s\y\g\p\c\9\c\3\t\l\9\4\j\4\4\4\3\r\s\x\t\m\6\1\5\f\n\7\m\0\i\t\r\a\x\s\o\u\f\a\n\0\2\x\o\9\z\p\a\e\j\5\x\w\e\t\d\k\0\g\m\f\k\q\k\4\j\7\0\h\s\8\i\p\7\9\x\z\1\4\o\f\t\6\r\r\d\i\3\j\n\g\c\t\t\8\a\s\g\z\x\q\3\u\4\b\w\u\h\g\6\p\g\v\d\6\w\n\b\7\o\i\5\u\p\z\t\i\3\j\8\k\r\2\z\5\d\x\4\2\r\g\a\v\q\0\9\l\l\7\h\3\o\3\s\a\7\x\s\d\h\s\c\a\5\j\n\k\6\9\4\u\k\m\f\3\p\6\u\d\z\6\p\2\z\x\k\6\4\g\x\h\j\b\k\2\h\0\d\i\5\k\d\f\f\5\8\2\i\0\w\9\a\7\r\u\l\6\q\s\i\z\0\9\p\r\u\p\6\v\p\0\c\q\u\0\p\x\2\l\w\t\s\s\4\k\5\2\p\j\c\1\u\d\8\y\2\t\3\9\8\n\z\p\d\m\a\t\j\b\5\x\x\2\n\w\6\5\7\n\1\s\f\s\5\8\8\x\d\o\4\0\e\3\5\k\d\6\z\u\3\1\e\0\6\2\t\0\d\n\a\l\d\d\8\g\8\7\h\4 ]] 00:08:23.206 04:04:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.206 04:04:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:23.206 [2024-07-23 04:04:16.504137] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:23.206 [2024-07-23 04:04:16.504269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77100 ] 00:08:23.467 [2024-07-23 04:04:16.626519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
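These forced-aio passes repeat the earlier four-flag matrix (direct, nonblock, sync, dsync), but every invocation now carries --aio because the second test run appended it to the shared DD_APP argument array (the "disabling liburing, forcing AIO" banner above), steering spdk_dd off the io_uring socket path. A small sketch of that array pattern; the array name and flags come from the log, while the run_dd wrapper is an assumption:

    # Shared argv array: append --aio once and every later call inherits it.
    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    DD_APP+=("--aio")
    run_dd() { "${DD_APP[@]}" "$@"; }
    run_dd --if=dd.dump0 --iflag=direct --of=dd.dump1 --oflag=direct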
00:08:23.467 [2024-07-23 04:04:16.643695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.467 [2024-07-23 04:04:16.696855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.467 [2024-07-23 04:04:16.751846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:23.731  Copying: 512/512 [B] (average 500 kBps) 00:08:23.731 00:08:23.731 04:04:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xprstf2donwdlqggy4csfic9yqn2120chvi798jdivcxi9olpdtzgzqstlvfie01e588mquteee1px89rubx5tpct47e8cn1jwmhgt4bifdaetx1xpu84wu8a7d7z5tpbuseescgq4dsrhrfskcyorg44u3nl2i7850jbgapw8a24x67qthbf2vtt1ui3yw4sygpc9c3tl94j4443rsxtm615fn7m0itraxsoufan02xo9zpaej5xwetdk0gmfkqk4j70hs8ip79xz14oft6rrdi3jngctt8asgzxq3u4bwuhg6pgvd6wnb7oi5upzti3j8kr2z5dx42rgavq09ll7h3o3sa7xsdhsca5jnk694ukmf3p6udz6p2zxk64gxhjbk2h0di5kdff582i0w9a7rul6qsiz09prup6vp0cqu0px2lwtss4k52pjc1ud8y2t398nzpdmatjb5xx2nw657n1sfs588xdo40e35kd6zu31e062t0dnaldd8g87h4 == \x\p\r\s\t\f\2\d\o\n\w\d\l\q\g\g\y\4\c\s\f\i\c\9\y\q\n\2\1\2\0\c\h\v\i\7\9\8\j\d\i\v\c\x\i\9\o\l\p\d\t\z\g\z\q\s\t\l\v\f\i\e\0\1\e\5\8\8\m\q\u\t\e\e\e\1\p\x\8\9\r\u\b\x\5\t\p\c\t\4\7\e\8\c\n\1\j\w\m\h\g\t\4\b\i\f\d\a\e\t\x\1\x\p\u\8\4\w\u\8\a\7\d\7\z\5\t\p\b\u\s\e\e\s\c\g\q\4\d\s\r\h\r\f\s\k\c\y\o\r\g\4\4\u\3\n\l\2\i\7\8\5\0\j\b\g\a\p\w\8\a\2\4\x\6\7\q\t\h\b\f\2\v\t\t\1\u\i\3\y\w\4\s\y\g\p\c\9\c\3\t\l\9\4\j\4\4\4\3\r\s\x\t\m\6\1\5\f\n\7\m\0\i\t\r\a\x\s\o\u\f\a\n\0\2\x\o\9\z\p\a\e\j\5\x\w\e\t\d\k\0\g\m\f\k\q\k\4\j\7\0\h\s\8\i\p\7\9\x\z\1\4\o\f\t\6\r\r\d\i\3\j\n\g\c\t\t\8\a\s\g\z\x\q\3\u\4\b\w\u\h\g\6\p\g\v\d\6\w\n\b\7\o\i\5\u\p\z\t\i\3\j\8\k\r\2\z\5\d\x\4\2\r\g\a\v\q\0\9\l\l\7\h\3\o\3\s\a\7\x\s\d\h\s\c\a\5\j\n\k\6\9\4\u\k\m\f\3\p\6\u\d\z\6\p\2\z\x\k\6\4\g\x\h\j\b\k\2\h\0\d\i\5\k\d\f\f\5\8\2\i\0\w\9\a\7\r\u\l\6\q\s\i\z\0\9\p\r\u\p\6\v\p\0\c\q\u\0\p\x\2\l\w\t\s\s\4\k\5\2\p\j\c\1\u\d\8\y\2\t\3\9\8\n\z\p\d\m\a\t\j\b\5\x\x\2\n\w\6\5\7\n\1\s\f\s\5\8\8\x\d\o\4\0\e\3\5\k\d\6\z\u\3\1\e\0\6\2\t\0\d\n\a\l\d\d\8\g\8\7\h\4 ]] 00:08:23.732 04:04:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.732 04:04:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:23.732 [2024-07-23 04:04:17.048930] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:23.732 [2024-07-23 04:04:17.049064] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77108 ] 00:08:23.990 [2024-07-23 04:04:17.173587] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:23.990 [2024-07-23 04:04:17.187333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.990 [2024-07-23 04:04:17.239936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.990 [2024-07-23 04:04:17.291131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:24.248  Copying: 512/512 [B] (average 250 kBps) 00:08:24.248 00:08:24.248 04:04:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xprstf2donwdlqggy4csfic9yqn2120chvi798jdivcxi9olpdtzgzqstlvfie01e588mquteee1px89rubx5tpct47e8cn1jwmhgt4bifdaetx1xpu84wu8a7d7z5tpbuseescgq4dsrhrfskcyorg44u3nl2i7850jbgapw8a24x67qthbf2vtt1ui3yw4sygpc9c3tl94j4443rsxtm615fn7m0itraxsoufan02xo9zpaej5xwetdk0gmfkqk4j70hs8ip79xz14oft6rrdi3jngctt8asgzxq3u4bwuhg6pgvd6wnb7oi5upzti3j8kr2z5dx42rgavq09ll7h3o3sa7xsdhsca5jnk694ukmf3p6udz6p2zxk64gxhjbk2h0di5kdff582i0w9a7rul6qsiz09prup6vp0cqu0px2lwtss4k52pjc1ud8y2t398nzpdmatjb5xx2nw657n1sfs588xdo40e35kd6zu31e062t0dnaldd8g87h4 == \x\p\r\s\t\f\2\d\o\n\w\d\l\q\g\g\y\4\c\s\f\i\c\9\y\q\n\2\1\2\0\c\h\v\i\7\9\8\j\d\i\v\c\x\i\9\o\l\p\d\t\z\g\z\q\s\t\l\v\f\i\e\0\1\e\5\8\8\m\q\u\t\e\e\e\1\p\x\8\9\r\u\b\x\5\t\p\c\t\4\7\e\8\c\n\1\j\w\m\h\g\t\4\b\i\f\d\a\e\t\x\1\x\p\u\8\4\w\u\8\a\7\d\7\z\5\t\p\b\u\s\e\e\s\c\g\q\4\d\s\r\h\r\f\s\k\c\y\o\r\g\4\4\u\3\n\l\2\i\7\8\5\0\j\b\g\a\p\w\8\a\2\4\x\6\7\q\t\h\b\f\2\v\t\t\1\u\i\3\y\w\4\s\y\g\p\c\9\c\3\t\l\9\4\j\4\4\4\3\r\s\x\t\m\6\1\5\f\n\7\m\0\i\t\r\a\x\s\o\u\f\a\n\0\2\x\o\9\z\p\a\e\j\5\x\w\e\t\d\k\0\g\m\f\k\q\k\4\j\7\0\h\s\8\i\p\7\9\x\z\1\4\o\f\t\6\r\r\d\i\3\j\n\g\c\t\t\8\a\s\g\z\x\q\3\u\4\b\w\u\h\g\6\p\g\v\d\6\w\n\b\7\o\i\5\u\p\z\t\i\3\j\8\k\r\2\z\5\d\x\4\2\r\g\a\v\q\0\9\l\l\7\h\3\o\3\s\a\7\x\s\d\h\s\c\a\5\j\n\k\6\9\4\u\k\m\f\3\p\6\u\d\z\6\p\2\z\x\k\6\4\g\x\h\j\b\k\2\h\0\d\i\5\k\d\f\f\5\8\2\i\0\w\9\a\7\r\u\l\6\q\s\i\z\0\9\p\r\u\p\6\v\p\0\c\q\u\0\p\x\2\l\w\t\s\s\4\k\5\2\p\j\c\1\u\d\8\y\2\t\3\9\8\n\z\p\d\m\a\t\j\b\5\x\x\2\n\w\6\5\7\n\1\s\f\s\5\8\8\x\d\o\4\0\e\3\5\k\d\6\z\u\3\1\e\0\6\2\t\0\d\n\a\l\d\d\8\g\8\7\h\4 ]] 00:08:24.248 04:04:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:24.248 04:04:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:24.248 04:04:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:24.248 04:04:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:24.248 04:04:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.248 04:04:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:24.507 [2024-07-23 04:04:17.591735] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:24.507 [2024-07-23 04:04:17.591826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77115 ] 00:08:24.507 [2024-07-23 04:04:17.712241] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:24.507 [2024-07-23 04:04:17.727237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.507 [2024-07-23 04:04:17.791643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.507 [2024-07-23 04:04:17.844409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:24.765  Copying: 512/512 [B] (average 500 kBps) 00:08:24.765 00:08:24.766 04:04:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n66bbilzcadeleu8xuttk8bana5onrr0eu3p6z65ajgpr55v5zx5os4ipkb82qyaa1a0y3n0miinyzrmcralz3tn8dinvtkacl23ozxgelskvexbl9dj0yjhxhv2rfoq89q7qe1662y9nrdu8m5aeugu0ue8t7pg5ybq9e2clxnm663c62546uokl60ic2wtmqpsz9okl3dzq80sfmeda9zdfq931512mmx9qfdg9ig0ppxl2a8wy4vqc7grc6380tiznlmjgfw41pv3qwyih6sq06nvgj16wmlz2twzs1oi8hl14wjvglohf2r40vunrdk4jlfj3uz6li2ocsxcbgykfcut0as3pznwwhkgzg0ll39m6ebasiw1iplq1svq6petjbo7y4a7a7tmbc996smokxjuovjenn71dt5qlglddzdolacn5zn669l9f1dycjkj1pcu8a2sf3d8vcdsd16782u0ttcb638q0v18mouecelz3zzunmk6rqyt9152 == \n\6\6\b\b\i\l\z\c\a\d\e\l\e\u\8\x\u\t\t\k\8\b\a\n\a\5\o\n\r\r\0\e\u\3\p\6\z\6\5\a\j\g\p\r\5\5\v\5\z\x\5\o\s\4\i\p\k\b\8\2\q\y\a\a\1\a\0\y\3\n\0\m\i\i\n\y\z\r\m\c\r\a\l\z\3\t\n\8\d\i\n\v\t\k\a\c\l\2\3\o\z\x\g\e\l\s\k\v\e\x\b\l\9\d\j\0\y\j\h\x\h\v\2\r\f\o\q\8\9\q\7\q\e\1\6\6\2\y\9\n\r\d\u\8\m\5\a\e\u\g\u\0\u\e\8\t\7\p\g\5\y\b\q\9\e\2\c\l\x\n\m\6\6\3\c\6\2\5\4\6\u\o\k\l\6\0\i\c\2\w\t\m\q\p\s\z\9\o\k\l\3\d\z\q\8\0\s\f\m\e\d\a\9\z\d\f\q\9\3\1\5\1\2\m\m\x\9\q\f\d\g\9\i\g\0\p\p\x\l\2\a\8\w\y\4\v\q\c\7\g\r\c\6\3\8\0\t\i\z\n\l\m\j\g\f\w\4\1\p\v\3\q\w\y\i\h\6\s\q\0\6\n\v\g\j\1\6\w\m\l\z\2\t\w\z\s\1\o\i\8\h\l\1\4\w\j\v\g\l\o\h\f\2\r\4\0\v\u\n\r\d\k\4\j\l\f\j\3\u\z\6\l\i\2\o\c\s\x\c\b\g\y\k\f\c\u\t\0\a\s\3\p\z\n\w\w\h\k\g\z\g\0\l\l\3\9\m\6\e\b\a\s\i\w\1\i\p\l\q\1\s\v\q\6\p\e\t\j\b\o\7\y\4\a\7\a\7\t\m\b\c\9\9\6\s\m\o\k\x\j\u\o\v\j\e\n\n\7\1\d\t\5\q\l\g\l\d\d\z\d\o\l\a\c\n\5\z\n\6\6\9\l\9\f\1\d\y\c\j\k\j\1\p\c\u\8\a\2\s\f\3\d\8\v\c\d\s\d\1\6\7\8\2\u\0\t\t\c\b\6\3\8\q\0\v\1\8\m\o\u\e\c\e\l\z\3\z\z\u\n\m\k\6\r\q\y\t\9\1\5\2 ]] 00:08:24.766 04:04:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.766 04:04:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:25.024 [2024-07-23 04:04:18.135819] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:25.024 [2024-07-23 04:04:18.135974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77123 ] 00:08:25.024 [2024-07-23 04:04:18.258715] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:25.024 [2024-07-23 04:04:18.273465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.024 [2024-07-23 04:04:18.325985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.283 [2024-07-23 04:04:18.377371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.283  Copying: 512/512 [B] (average 500 kBps) 00:08:25.283 00:08:25.283 04:04:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n66bbilzcadeleu8xuttk8bana5onrr0eu3p6z65ajgpr55v5zx5os4ipkb82qyaa1a0y3n0miinyzrmcralz3tn8dinvtkacl23ozxgelskvexbl9dj0yjhxhv2rfoq89q7qe1662y9nrdu8m5aeugu0ue8t7pg5ybq9e2clxnm663c62546uokl60ic2wtmqpsz9okl3dzq80sfmeda9zdfq931512mmx9qfdg9ig0ppxl2a8wy4vqc7grc6380tiznlmjgfw41pv3qwyih6sq06nvgj16wmlz2twzs1oi8hl14wjvglohf2r40vunrdk4jlfj3uz6li2ocsxcbgykfcut0as3pznwwhkgzg0ll39m6ebasiw1iplq1svq6petjbo7y4a7a7tmbc996smokxjuovjenn71dt5qlglddzdolacn5zn669l9f1dycjkj1pcu8a2sf3d8vcdsd16782u0ttcb638q0v18mouecelz3zzunmk6rqyt9152 == \n\6\6\b\b\i\l\z\c\a\d\e\l\e\u\8\x\u\t\t\k\8\b\a\n\a\5\o\n\r\r\0\e\u\3\p\6\z\6\5\a\j\g\p\r\5\5\v\5\z\x\5\o\s\4\i\p\k\b\8\2\q\y\a\a\1\a\0\y\3\n\0\m\i\i\n\y\z\r\m\c\r\a\l\z\3\t\n\8\d\i\n\v\t\k\a\c\l\2\3\o\z\x\g\e\l\s\k\v\e\x\b\l\9\d\j\0\y\j\h\x\h\v\2\r\f\o\q\8\9\q\7\q\e\1\6\6\2\y\9\n\r\d\u\8\m\5\a\e\u\g\u\0\u\e\8\t\7\p\g\5\y\b\q\9\e\2\c\l\x\n\m\6\6\3\c\6\2\5\4\6\u\o\k\l\6\0\i\c\2\w\t\m\q\p\s\z\9\o\k\l\3\d\z\q\8\0\s\f\m\e\d\a\9\z\d\f\q\9\3\1\5\1\2\m\m\x\9\q\f\d\g\9\i\g\0\p\p\x\l\2\a\8\w\y\4\v\q\c\7\g\r\c\6\3\8\0\t\i\z\n\l\m\j\g\f\w\4\1\p\v\3\q\w\y\i\h\6\s\q\0\6\n\v\g\j\1\6\w\m\l\z\2\t\w\z\s\1\o\i\8\h\l\1\4\w\j\v\g\l\o\h\f\2\r\4\0\v\u\n\r\d\k\4\j\l\f\j\3\u\z\6\l\i\2\o\c\s\x\c\b\g\y\k\f\c\u\t\0\a\s\3\p\z\n\w\w\h\k\g\z\g\0\l\l\3\9\m\6\e\b\a\s\i\w\1\i\p\l\q\1\s\v\q\6\p\e\t\j\b\o\7\y\4\a\7\a\7\t\m\b\c\9\9\6\s\m\o\k\x\j\u\o\v\j\e\n\n\7\1\d\t\5\q\l\g\l\d\d\z\d\o\l\a\c\n\5\z\n\6\6\9\l\9\f\1\d\y\c\j\k\j\1\p\c\u\8\a\2\s\f\3\d\8\v\c\d\s\d\1\6\7\8\2\u\0\t\t\c\b\6\3\8\q\0\v\1\8\m\o\u\e\c\e\l\z\3\z\z\u\n\m\k\6\r\q\y\t\9\1\5\2 ]] 00:08:25.283 04:04:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.283 04:04:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:25.542 [2024-07-23 04:04:18.666799] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:25.542 [2024-07-23 04:04:18.666967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77130 ] 00:08:25.542 [2024-07-23 04:04:18.789430] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:25.542 [2024-07-23 04:04:18.803970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.542 [2024-07-23 04:04:18.863494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.801 [2024-07-23 04:04:18.918988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.060  Copying: 512/512 [B] (average 125 kBps) 00:08:26.060 00:08:26.061 04:04:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n66bbilzcadeleu8xuttk8bana5onrr0eu3p6z65ajgpr55v5zx5os4ipkb82qyaa1a0y3n0miinyzrmcralz3tn8dinvtkacl23ozxgelskvexbl9dj0yjhxhv2rfoq89q7qe1662y9nrdu8m5aeugu0ue8t7pg5ybq9e2clxnm663c62546uokl60ic2wtmqpsz9okl3dzq80sfmeda9zdfq931512mmx9qfdg9ig0ppxl2a8wy4vqc7grc6380tiznlmjgfw41pv3qwyih6sq06nvgj16wmlz2twzs1oi8hl14wjvglohf2r40vunrdk4jlfj3uz6li2ocsxcbgykfcut0as3pznwwhkgzg0ll39m6ebasiw1iplq1svq6petjbo7y4a7a7tmbc996smokxjuovjenn71dt5qlglddzdolacn5zn669l9f1dycjkj1pcu8a2sf3d8vcdsd16782u0ttcb638q0v18mouecelz3zzunmk6rqyt9152 == \n\6\6\b\b\i\l\z\c\a\d\e\l\e\u\8\x\u\t\t\k\8\b\a\n\a\5\o\n\r\r\0\e\u\3\p\6\z\6\5\a\j\g\p\r\5\5\v\5\z\x\5\o\s\4\i\p\k\b\8\2\q\y\a\a\1\a\0\y\3\n\0\m\i\i\n\y\z\r\m\c\r\a\l\z\3\t\n\8\d\i\n\v\t\k\a\c\l\2\3\o\z\x\g\e\l\s\k\v\e\x\b\l\9\d\j\0\y\j\h\x\h\v\2\r\f\o\q\8\9\q\7\q\e\1\6\6\2\y\9\n\r\d\u\8\m\5\a\e\u\g\u\0\u\e\8\t\7\p\g\5\y\b\q\9\e\2\c\l\x\n\m\6\6\3\c\6\2\5\4\6\u\o\k\l\6\0\i\c\2\w\t\m\q\p\s\z\9\o\k\l\3\d\z\q\8\0\s\f\m\e\d\a\9\z\d\f\q\9\3\1\5\1\2\m\m\x\9\q\f\d\g\9\i\g\0\p\p\x\l\2\a\8\w\y\4\v\q\c\7\g\r\c\6\3\8\0\t\i\z\n\l\m\j\g\f\w\4\1\p\v\3\q\w\y\i\h\6\s\q\0\6\n\v\g\j\1\6\w\m\l\z\2\t\w\z\s\1\o\i\8\h\l\1\4\w\j\v\g\l\o\h\f\2\r\4\0\v\u\n\r\d\k\4\j\l\f\j\3\u\z\6\l\i\2\o\c\s\x\c\b\g\y\k\f\c\u\t\0\a\s\3\p\z\n\w\w\h\k\g\z\g\0\l\l\3\9\m\6\e\b\a\s\i\w\1\i\p\l\q\1\s\v\q\6\p\e\t\j\b\o\7\y\4\a\7\a\7\t\m\b\c\9\9\6\s\m\o\k\x\j\u\o\v\j\e\n\n\7\1\d\t\5\q\l\g\l\d\d\z\d\o\l\a\c\n\5\z\n\6\6\9\l\9\f\1\d\y\c\j\k\j\1\p\c\u\8\a\2\s\f\3\d\8\v\c\d\s\d\1\6\7\8\2\u\0\t\t\c\b\6\3\8\q\0\v\1\8\m\o\u\e\c\e\l\z\3\z\z\u\n\m\k\6\r\q\y\t\9\1\5\2 ]] 00:08:26.061 04:04:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.061 04:04:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:26.061 [2024-07-23 04:04:19.214334] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:26.061 [2024-07-23 04:04:19.214471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77138 ] 00:08:26.061 [2024-07-23 04:04:19.337828] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:26.061 [2024-07-23 04:04:19.354773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.320 [2024-07-23 04:04:19.405097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.320 [2024-07-23 04:04:19.456423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.579  Copying: 512/512 [B] (average 500 kBps) 00:08:26.579 00:08:26.579 04:04:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n66bbilzcadeleu8xuttk8bana5onrr0eu3p6z65ajgpr55v5zx5os4ipkb82qyaa1a0y3n0miinyzrmcralz3tn8dinvtkacl23ozxgelskvexbl9dj0yjhxhv2rfoq89q7qe1662y9nrdu8m5aeugu0ue8t7pg5ybq9e2clxnm663c62546uokl60ic2wtmqpsz9okl3dzq80sfmeda9zdfq931512mmx9qfdg9ig0ppxl2a8wy4vqc7grc6380tiznlmjgfw41pv3qwyih6sq06nvgj16wmlz2twzs1oi8hl14wjvglohf2r40vunrdk4jlfj3uz6li2ocsxcbgykfcut0as3pznwwhkgzg0ll39m6ebasiw1iplq1svq6petjbo7y4a7a7tmbc996smokxjuovjenn71dt5qlglddzdolacn5zn669l9f1dycjkj1pcu8a2sf3d8vcdsd16782u0ttcb638q0v18mouecelz3zzunmk6rqyt9152 == \n\6\6\b\b\i\l\z\c\a\d\e\l\e\u\8\x\u\t\t\k\8\b\a\n\a\5\o\n\r\r\0\e\u\3\p\6\z\6\5\a\j\g\p\r\5\5\v\5\z\x\5\o\s\4\i\p\k\b\8\2\q\y\a\a\1\a\0\y\3\n\0\m\i\i\n\y\z\r\m\c\r\a\l\z\3\t\n\8\d\i\n\v\t\k\a\c\l\2\3\o\z\x\g\e\l\s\k\v\e\x\b\l\9\d\j\0\y\j\h\x\h\v\2\r\f\o\q\8\9\q\7\q\e\1\6\6\2\y\9\n\r\d\u\8\m\5\a\e\u\g\u\0\u\e\8\t\7\p\g\5\y\b\q\9\e\2\c\l\x\n\m\6\6\3\c\6\2\5\4\6\u\o\k\l\6\0\i\c\2\w\t\m\q\p\s\z\9\o\k\l\3\d\z\q\8\0\s\f\m\e\d\a\9\z\d\f\q\9\3\1\5\1\2\m\m\x\9\q\f\d\g\9\i\g\0\p\p\x\l\2\a\8\w\y\4\v\q\c\7\g\r\c\6\3\8\0\t\i\z\n\l\m\j\g\f\w\4\1\p\v\3\q\w\y\i\h\6\s\q\0\6\n\v\g\j\1\6\w\m\l\z\2\t\w\z\s\1\o\i\8\h\l\1\4\w\j\v\g\l\o\h\f\2\r\4\0\v\u\n\r\d\k\4\j\l\f\j\3\u\z\6\l\i\2\o\c\s\x\c\b\g\y\k\f\c\u\t\0\a\s\3\p\z\n\w\w\h\k\g\z\g\0\l\l\3\9\m\6\e\b\a\s\i\w\1\i\p\l\q\1\s\v\q\6\p\e\t\j\b\o\7\y\4\a\7\a\7\t\m\b\c\9\9\6\s\m\o\k\x\j\u\o\v\j\e\n\n\7\1\d\t\5\q\l\g\l\d\d\z\d\o\l\a\c\n\5\z\n\6\6\9\l\9\f\1\d\y\c\j\k\j\1\p\c\u\8\a\2\s\f\3\d\8\v\c\d\s\d\1\6\7\8\2\u\0\t\t\c\b\6\3\8\q\0\v\1\8\m\o\u\e\c\e\l\z\3\z\z\u\n\m\k\6\r\q\y\t\9\1\5\2 ]] 00:08:26.579 00:08:26.579 real 0m4.322s 00:08:26.579 user 0m2.203s 00:08:26.579 sys 0m1.151s 00:08:26.579 04:04:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.579 ************************************ 00:08:26.579 END TEST dd_flags_misc_forced_aio 00:08:26.579 ************************************ 00:08:26.579 04:04:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:26.579 04:04:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:26.579 04:04:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:26.579 04:04:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:26.579 04:04:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:26.579 00:08:26.579 real 0m20.489s 00:08:26.579 user 0m9.666s 00:08:26.579 sys 0m6.686s 00:08:26.579 04:04:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.579 ************************************ 00:08:26.579 END TEST spdk_dd_posix 00:08:26.579 ************************************ 00:08:26.579 04:04:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:26.579 04:04:19 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:26.579 04:04:19 spdk_dd -- 
dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:26.579 04:04:19 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:26.579 04:04:19 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.579 04:04:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:26.579 ************************************ 00:08:26.579 START TEST spdk_dd_malloc 00:08:26.579 ************************************ 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:26.579 * Looking for test storage... 00:08:26.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:26.579 ************************************ 00:08:26.579 START TEST dd_malloc_copy 00:08:26.579 ************************************ 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:26.579 04:04:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:26.838 { 00:08:26.838 "subsystems": [ 00:08:26.838 { 00:08:26.838 "subsystem": "bdev", 00:08:26.838 "config": [ 00:08:26.838 { 00:08:26.838 "params": { 00:08:26.838 "block_size": 512, 00:08:26.838 "num_blocks": 1048576, 00:08:26.838 "name": "malloc0" 00:08:26.838 }, 00:08:26.838 "method": "bdev_malloc_create" 00:08:26.838 }, 00:08:26.838 { 00:08:26.838 "params": { 00:08:26.838 "block_size": 512, 00:08:26.838 "num_blocks": 1048576, 00:08:26.838 "name": "malloc1" 00:08:26.838 }, 00:08:26.838 "method": "bdev_malloc_create" 00:08:26.838 }, 00:08:26.838 { 00:08:26.838 "method": "bdev_wait_for_examine" 00:08:26.838 } 00:08:26.838 ] 00:08:26.838 } 00:08:26.838 ] 00:08:26.838 } 00:08:26.838 [2024-07-23 04:04:19.970312] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:26.838 [2024-07-23 04:04:19.970428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77212 ] 00:08:26.838 [2024-07-23 04:04:20.093851] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:26.838 [2024-07-23 04:04:20.111366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.838 [2024-07-23 04:04:20.167599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.097 [2024-07-23 04:04:20.219468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.980  Copying: 233/512 [MB] (233 MBps) Copying: 466/512 [MB] (233 MBps) Copying: 512/512 [MB] (average 233 MBps) 00:08:29.980 00:08:29.980 04:04:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:29.980 04:04:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:29.980 04:04:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:29.980 04:04:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:30.239 [2024-07-23 04:04:23.337164] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:30.239 [2024-07-23 04:04:23.337261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77254 ] 00:08:30.239 { 00:08:30.239 "subsystems": [ 00:08:30.239 { 00:08:30.239 "subsystem": "bdev", 00:08:30.239 "config": [ 00:08:30.239 { 00:08:30.239 "params": { 00:08:30.239 "block_size": 512, 00:08:30.239 "num_blocks": 1048576, 00:08:30.239 "name": "malloc0" 00:08:30.239 }, 00:08:30.239 "method": "bdev_malloc_create" 00:08:30.239 }, 00:08:30.239 { 00:08:30.239 "params": { 00:08:30.239 "block_size": 512, 00:08:30.239 "num_blocks": 1048576, 00:08:30.239 "name": "malloc1" 00:08:30.239 }, 00:08:30.239 "method": "bdev_malloc_create" 00:08:30.239 }, 00:08:30.239 { 00:08:30.239 "method": "bdev_wait_for_examine" 00:08:30.239 } 00:08:30.239 ] 00:08:30.239 } 00:08:30.239 ] 00:08:30.239 } 00:08:30.239 [2024-07-23 04:04:23.458227] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:30.239 [2024-07-23 04:04:23.474045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.239 [2024-07-23 04:04:23.526087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.239 [2024-07-23 04:04:23.579931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.555  Copying: 235/512 [MB] (235 MBps) Copying: 471/512 [MB] (235 MBps) Copying: 512/512 [MB] (average 234 MBps) 00:08:33.555 00:08:33.555 00:08:33.555 real 0m6.727s 00:08:33.555 user 0m5.760s 00:08:33.555 sys 0m0.820s 00:08:33.555 04:04:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.555 04:04:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:33.555 ************************************ 00:08:33.555 END TEST dd_malloc_copy 00:08:33.555 ************************************ 00:08:33.555 04:04:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:08:33.555 00:08:33.555 real 0m6.881s 00:08:33.555 user 0m5.819s 00:08:33.555 sys 0m0.912s 00:08:33.555 04:04:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.555 04:04:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:33.555 ************************************ 00:08:33.555 END TEST spdk_dd_malloc 00:08:33.555 ************************************ 00:08:33.555 04:04:26 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:33.555 04:04:26 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:33.555 04:04:26 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:33.555 04:04:26 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.555 04:04:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:33.555 ************************************ 00:08:33.555 START TEST spdk_dd_bdev_to_bdev 00:08:33.555 ************************************ 00:08:33.555 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:33.555 * Looking for test storage... 
00:08:33.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:33.555 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.555 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.555 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.555 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:33.556 
04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:33.556 ************************************ 00:08:33.556 START TEST dd_inflate_file 00:08:33.556 ************************************ 00:08:33.556 04:04:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:33.556 [2024-07-23 04:04:26.889306] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:33.556 [2024-07-23 04:04:26.889400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77358 ] 00:08:33.814 [2024-07-23 04:04:27.010599] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:33.814 [2024-07-23 04:04:27.027149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.814 [2024-07-23 04:04:27.092233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.814 [2024-07-23 04:04:27.143386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:34.072  Copying: 64/64 [MB] (average 1560 MBps) 00:08:34.072 00:08:34.330 00:08:34.330 real 0m0.581s 00:08:34.330 user 0m0.334s 00:08:34.330 sys 0m0.296s 00:08:34.330 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.330 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:34.330 ************************************ 00:08:34.330 END TEST dd_inflate_file 00:08:34.330 ************************************ 00:08:34.330 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:08:34.330 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:34.330 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:34.330 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:34.330 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:34.330 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:34.330 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.331 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:34.331 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:34.331 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:34.331 ************************************ 00:08:34.331 START TEST dd_copy_to_out_bdev 00:08:34.331 ************************************ 00:08:34.331 04:04:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:34.331 { 00:08:34.331 "subsystems": [ 00:08:34.331 { 00:08:34.331 "subsystem": "bdev", 00:08:34.331 "config": [ 00:08:34.331 { 00:08:34.331 "params": { 00:08:34.331 "trtype": "pcie", 00:08:34.331 "traddr": "0000:00:10.0", 00:08:34.331 "name": "Nvme0" 00:08:34.331 }, 00:08:34.331 "method": "bdev_nvme_attach_controller" 00:08:34.331 }, 00:08:34.331 { 00:08:34.331 "params": { 00:08:34.331 "trtype": "pcie", 00:08:34.331 "traddr": "0000:00:11.0", 00:08:34.331 "name": "Nvme1" 00:08:34.331 }, 00:08:34.331 "method": "bdev_nvme_attach_controller" 00:08:34.331 }, 00:08:34.331 { 00:08:34.331 "method": "bdev_wait_for_examine" 00:08:34.331 } 00:08:34.331 ] 00:08:34.331 } 00:08:34.331 ] 00:08:34.331 } 00:08:34.331 [2024-07-23 04:04:27.531759] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:34.331 [2024-07-23 04:04:27.531859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77391 ] 00:08:34.331 [2024-07-23 04:04:27.653214] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:34.331 [2024-07-23 04:04:27.668396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.589 [2024-07-23 04:04:27.745989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.589 [2024-07-23 04:04:27.802102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.223  Copying: 50/64 [MB] (50 MBps) Copying: 64/64 [MB] (average 50 MBps) 00:08:36.223 00:08:36.223 00:08:36.223 real 0m2.011s 00:08:36.223 user 0m1.777s 00:08:36.223 sys 0m1.617s 00:08:36.223 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:36.224 ************************************ 00:08:36.224 END TEST dd_copy_to_out_bdev 00:08:36.224 ************************************ 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:36.224 ************************************ 00:08:36.224 START TEST dd_offset_magic 00:08:36.224 ************************************ 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:36.224 04:04:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:36.488 [2024-07-23 04:04:29.591654] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:36.488 [2024-07-23 04:04:29.591770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77437 ] 00:08:36.488 { 00:08:36.488 "subsystems": [ 00:08:36.488 { 00:08:36.488 "subsystem": "bdev", 00:08:36.488 "config": [ 00:08:36.488 { 00:08:36.488 "params": { 00:08:36.488 "trtype": "pcie", 00:08:36.488 "traddr": "0000:00:10.0", 00:08:36.488 "name": "Nvme0" 00:08:36.488 }, 00:08:36.488 "method": "bdev_nvme_attach_controller" 00:08:36.488 }, 00:08:36.488 { 00:08:36.488 "params": { 00:08:36.488 "trtype": "pcie", 00:08:36.488 "traddr": "0000:00:11.0", 00:08:36.488 "name": "Nvme1" 00:08:36.488 }, 00:08:36.488 "method": "bdev_nvme_attach_controller" 00:08:36.488 }, 00:08:36.488 { 00:08:36.488 "method": "bdev_wait_for_examine" 00:08:36.488 } 00:08:36.488 ] 00:08:36.488 } 00:08:36.488 ] 00:08:36.488 } 00:08:36.488 [2024-07-23 04:04:29.713529] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:36.488 [2024-07-23 04:04:29.734268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.488 [2024-07-23 04:04:29.800008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.775 [2024-07-23 04:04:29.858286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:37.046  Copying: 65/65 [MB] (average 802 MBps) 00:08:37.046 00:08:37.046 04:04:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:37.046 04:04:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:37.046 04:04:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:37.046 04:04:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:37.304 [2024-07-23 04:04:30.398068] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:37.304 [2024-07-23 04:04:30.398170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77457 ] 00:08:37.304 { 00:08:37.304 "subsystems": [ 00:08:37.304 { 00:08:37.304 "subsystem": "bdev", 00:08:37.304 "config": [ 00:08:37.304 { 00:08:37.304 "params": { 00:08:37.304 "trtype": "pcie", 00:08:37.304 "traddr": "0000:00:10.0", 00:08:37.304 "name": "Nvme0" 00:08:37.304 }, 00:08:37.304 "method": "bdev_nvme_attach_controller" 00:08:37.304 }, 00:08:37.304 { 00:08:37.304 "params": { 00:08:37.304 "trtype": "pcie", 00:08:37.304 "traddr": "0000:00:11.0", 00:08:37.304 "name": "Nvme1" 00:08:37.304 }, 00:08:37.305 "method": "bdev_nvme_attach_controller" 00:08:37.305 }, 00:08:37.305 { 00:08:37.305 "method": "bdev_wait_for_examine" 00:08:37.305 } 00:08:37.305 ] 00:08:37.305 } 00:08:37.305 ] 00:08:37.305 } 00:08:37.305 [2024-07-23 04:04:30.519650] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:37.305 [2024-07-23 04:04:30.535768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.305 [2024-07-23 04:04:30.604243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.563 [2024-07-23 04:04:30.658061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:37.821  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:37.821 00:08:37.821 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:37.821 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:37.821 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:37.821 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:37.821 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:37.821 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:37.821 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:37.821 [2024-07-23 04:04:31.087136] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:37.822 [2024-07-23 04:04:31.087244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77479 ] 00:08:37.822 { 00:08:37.822 "subsystems": [ 00:08:37.822 { 00:08:37.822 "subsystem": "bdev", 00:08:37.822 "config": [ 00:08:37.822 { 00:08:37.822 "params": { 00:08:37.822 "trtype": "pcie", 00:08:37.822 "traddr": "0000:00:10.0", 00:08:37.822 "name": "Nvme0" 00:08:37.822 }, 00:08:37.822 "method": "bdev_nvme_attach_controller" 00:08:37.822 }, 00:08:37.822 { 00:08:37.822 "params": { 00:08:37.822 "trtype": "pcie", 00:08:37.822 "traddr": "0000:00:11.0", 00:08:37.822 "name": "Nvme1" 00:08:37.822 }, 00:08:37.822 "method": "bdev_nvme_attach_controller" 00:08:37.822 }, 00:08:37.822 { 00:08:37.822 "method": "bdev_wait_for_examine" 00:08:37.822 } 00:08:37.822 ] 00:08:37.822 } 00:08:37.822 ] 00:08:37.822 } 00:08:38.080 [2024-07-23 04:04:31.208853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:38.080 [2024-07-23 04:04:31.226709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.080 [2024-07-23 04:04:31.288921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.080 [2024-07-23 04:04:31.344049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:38.597  Copying: 65/65 [MB] (average 855 MBps) 00:08:38.597 00:08:38.597 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:38.597 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:38.597 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:38.597 04:04:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:38.597 { 00:08:38.597 "subsystems": [ 00:08:38.597 { 00:08:38.597 "subsystem": "bdev", 00:08:38.597 "config": [ 00:08:38.597 { 00:08:38.597 "params": { 00:08:38.597 "trtype": "pcie", 00:08:38.597 "traddr": "0000:00:10.0", 00:08:38.597 "name": "Nvme0" 00:08:38.597 }, 00:08:38.597 "method": "bdev_nvme_attach_controller" 00:08:38.597 }, 00:08:38.597 { 00:08:38.597 "params": { 00:08:38.597 "trtype": "pcie", 00:08:38.597 "traddr": "0000:00:11.0", 00:08:38.597 "name": "Nvme1" 00:08:38.597 }, 00:08:38.597 "method": "bdev_nvme_attach_controller" 00:08:38.597 }, 00:08:38.597 { 00:08:38.597 "method": "bdev_wait_for_examine" 00:08:38.597 } 00:08:38.597 ] 00:08:38.597 } 00:08:38.597 ] 00:08:38.598 } 00:08:38.598 [2024-07-23 04:04:31.911818] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:38.598 [2024-07-23 04:04:31.911931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77488 ] 00:08:38.856 [2024-07-23 04:04:32.035208] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:38.856 [2024-07-23 04:04:32.048632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.856 [2024-07-23 04:04:32.126542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.856 [2024-07-23 04:04:32.183303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:39.373  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:39.373 00:08:39.373 ************************************ 00:08:39.373 END TEST dd_offset_magic 00:08:39.373 ************************************ 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:39.373 00:08:39.373 real 0m3.018s 00:08:39.373 user 0m2.115s 00:08:39.373 sys 0m0.948s 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:39.373 04:04:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:39.373 [2024-07-23 04:04:32.659691] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:39.373 [2024-07-23 04:04:32.659794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77525 ] 00:08:39.373 { 00:08:39.373 "subsystems": [ 00:08:39.373 { 00:08:39.373 "subsystem": "bdev", 00:08:39.373 "config": [ 00:08:39.373 { 00:08:39.373 "params": { 00:08:39.373 "trtype": "pcie", 00:08:39.373 "traddr": "0000:00:10.0", 00:08:39.373 "name": "Nvme0" 00:08:39.373 }, 00:08:39.373 "method": "bdev_nvme_attach_controller" 00:08:39.373 }, 00:08:39.373 { 00:08:39.373 "params": { 00:08:39.373 "trtype": "pcie", 00:08:39.373 "traddr": "0000:00:11.0", 00:08:39.373 "name": "Nvme1" 00:08:39.373 }, 00:08:39.373 "method": "bdev_nvme_attach_controller" 00:08:39.373 }, 00:08:39.373 { 00:08:39.373 "method": "bdev_wait_for_examine" 00:08:39.373 } 00:08:39.373 ] 00:08:39.373 } 00:08:39.373 ] 00:08:39.373 } 00:08:39.631 [2024-07-23 04:04:32.783815] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:39.631 [2024-07-23 04:04:32.806174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.631 [2024-07-23 04:04:32.884217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.631 [2024-07-23 04:04:32.943162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:40.146  Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:40.146 00:08:40.146 04:04:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:40.146 04:04:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:40.146 04:04:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:40.146 04:04:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:40.146 04:04:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:40.146 04:04:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:40.146 04:04:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:40.146 04:04:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:40.146 04:04:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:40.146 04:04:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:40.146 [2024-07-23 04:04:33.368487] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:40.146 [2024-07-23 04:04:33.368566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77546 ] 00:08:40.146 { 00:08:40.146 "subsystems": [ 00:08:40.146 { 00:08:40.146 "subsystem": "bdev", 00:08:40.146 "config": [ 00:08:40.146 { 00:08:40.146 "params": { 00:08:40.146 "trtype": "pcie", 00:08:40.146 "traddr": "0000:00:10.0", 00:08:40.146 "name": "Nvme0" 00:08:40.146 }, 00:08:40.146 "method": "bdev_nvme_attach_controller" 00:08:40.146 }, 00:08:40.146 { 00:08:40.146 "params": { 00:08:40.146 "trtype": "pcie", 00:08:40.146 "traddr": "0000:00:11.0", 00:08:40.146 "name": "Nvme1" 00:08:40.146 }, 00:08:40.146 "method": "bdev_nvme_attach_controller" 00:08:40.146 }, 00:08:40.146 { 00:08:40.146 "method": "bdev_wait_for_examine" 00:08:40.146 } 00:08:40.146 ] 00:08:40.146 } 00:08:40.146 ] 00:08:40.146 } 00:08:40.146 [2024-07-23 04:04:33.485450] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:40.403 [2024-07-23 04:04:33.500081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.404 [2024-07-23 04:04:33.569792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.404 [2024-07-23 04:04:33.623386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:40.920  Copying: 5120/5120 [kB] (average 833 MBps) 00:08:40.920 00:08:40.920 04:04:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:40.920 00:08:40.920 real 0m7.293s 00:08:40.920 user 0m5.314s 00:08:40.920 sys 0m3.553s 00:08:40.920 04:04:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.920 ************************************ 00:08:40.920 END TEST spdk_dd_bdev_to_bdev 00:08:40.920 ************************************ 00:08:40.920 04:04:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:40.920 04:04:34 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:40.920 04:04:34 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:40.920 04:04:34 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:40.920 04:04:34 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:40.920 04:04:34 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.920 04:04:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:40.920 ************************************ 00:08:40.920 START TEST spdk_dd_uring 00:08:40.920 ************************************ 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:40.920 * Looking for test storage... 
00:08:40.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:40.920 ************************************ 00:08:40.920 START TEST dd_uring_copy 00:08:40.920 ************************************ 00:08:40.920 
04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:40.920 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:40.921 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:40.921 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:40.921 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:40.921 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:40.921 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:40.921 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:40.921 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=5ujc0rs606cfxm96y6kpn23rgx2n46qaavspibm0efn8lcnoy3ksektnh5wozgzd6j02tq13gx6wypahbjrce8jmvn27u3x5sjcehoncdqkp4y2ojgqgc6r8sq2917kz6p2rzibbcgs221x8111uk03xheuj007faxobf6gvz7panbwqo66dnrgljr9q0ecrlw1vzigunwd6io6gj6mggluas1joz98jwvkjkh95pfz3vnv1y3ankq5q7i3tkpqdb9b8rfsy0pyz7shurbpsokc4qd9dby215hi39i3bt3hcw33pmi1ntnkm2k6pg128019czqjisrznmynzwc6i9091283s9fkelq45ohpzh2b5ua3ni464q358o4aw657ai6li0z46cjhv4li4rpbaru8wsop1ghm2s7ogx1v9loiavtuyq1tufovn0nicl89p38jstuseyog1zcozbf79yomvfvkih650eln2sc9p0hiy3v0c6qyzdvjnjttz3n30o3h11sk8sqffl2noa5kamwkkmhhs7shqrjf5nm60z2976zmpgcjv6cn4cgxg820qy48ds82xcafmbzyygql9awc95rmdiezyiyfaay4gpqqzus16i4oy0f70u3ze7yipou10w3ibonte7l7nubjw02xpo5dbz409rmj9n91lbgnm7iw5b7r5axl2mw9fx7v24no75489iizfzv5ocw9w29rzmz93dlfhp422lwwvm5vvzqox1zvl6l01o07d6xcvyqqrdw53a2wd07x07t4xdsxzyo18kc5tleeoovqdpxjdci3wmvr6918jtd7xzscns9f5l65rwet3jzmobwn4amzdri1prwvve767xwntre8jzvnp960n7ym0pat5skw31dx2otvdojj6kuuxr7jmj8i2m8b2gh4ycmjy6prkpz8yebi94e6c9lvul9xfwpxwbgl4wkhvpf2fatdui2z1x332udm9hmhl6ihvgx5ul99zj94cipd9dnl4dm0vc9i3 00:08:40.921 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 5ujc0rs606cfxm96y6kpn23rgx2n46qaavspibm0efn8lcnoy3ksektnh5wozgzd6j02tq13gx6wypahbjrce8jmvn27u3x5sjcehoncdqkp4y2ojgqgc6r8sq2917kz6p2rzibbcgs221x8111uk03xheuj007faxobf6gvz7panbwqo66dnrgljr9q0ecrlw1vzigunwd6io6gj6mggluas1joz98jwvkjkh95pfz3vnv1y3ankq5q7i3tkpqdb9b8rfsy0pyz7shurbpsokc4qd9dby215hi39i3bt3hcw33pmi1ntnkm2k6pg128019czqjisrznmynzwc6i9091283s9fkelq45ohpzh2b5ua3ni464q358o4aw657ai6li0z46cjhv4li4rpbaru8wsop1ghm2s7ogx1v9loiavtuyq1tufovn0nicl89p38jstuseyog1zcozbf79yomvfvkih650eln2sc9p0hiy3v0c6qyzdvjnjttz3n30o3h11sk8sqffl2noa5kamwkkmhhs7shqrjf5nm60z2976zmpgcjv6cn4cgxg820qy48ds82xcafmbzyygql9awc95rmdiezyiyfaay4gpqqzus16i4oy0f70u3ze7yipou10w3ibonte7l7nubjw02xpo5dbz409rmj9n91lbgnm7iw5b7r5axl2mw9fx7v24no75489iizfzv5ocw9w29rzmz93dlfhp422lwwvm5vvzqox1zvl6l01o07d6xcvyqqrdw53a2wd07x07t4xdsxzyo18kc5tleeoovqdpxjdci3wmvr6918jtd7xzscns9f5l65rwet3jzmobwn4amzdri1prwvve767xwntre8jzvnp960n7ym0pat5skw31dx2otvdojj6kuuxr7jmj8i2m8b2gh4ycmjy6prkpz8yebi94e6c9lvul9xfwpxwbgl4wkhvpf2fatdui2z1x332udm9hmhl6ihvgx5ul99zj94cipd9dnl4dm0vc9i3 00:08:40.921 04:04:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:41.179 [2024-07-23 04:04:34.262788] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:41.179 [2024-07-23 04:04:34.262909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77616 ] 00:08:41.179 [2024-07-23 04:04:34.383975] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
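A note on the odd-looking block size in the append invocation above: gen_bytes emits a 1024-character magic string, the echo at uring.sh@42 presumably lands it in magic.dump0 (xtrace does not show redirections) along with echo's trailing newline, and spdk_dd then appends a single 536869887-byte block of zeros. The numbers appear to be chosen so the dump exactly fills the 512M zram device:

# 1024-byte magic + 1 newline from echo + the appended zero block
echo $(( 1024 + 1 + 536869887 ))   # 536870912 = 512 * 1024 * 1024 bytes, exactly 512 MiB

which is consistent with the later copy into uring0 reporting 512/512 [MB].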
00:08:41.179 [2024-07-23 04:04:34.404753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.179 [2024-07-23 04:04:34.482285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.437 [2024-07-23 04:04:34.539419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:42.601  Copying: 511/511 [MB] (average 1019 MBps) 00:08:42.601 00:08:42.601 04:04:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:42.601 04:04:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:42.601 04:04:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:42.601 04:04:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:42.601 [2024-07-23 04:04:35.706617] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:42.601 [2024-07-23 04:04:35.706708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77632 ] 00:08:42.601 { 00:08:42.601 "subsystems": [ 00:08:42.601 { 00:08:42.601 "subsystem": "bdev", 00:08:42.601 "config": [ 00:08:42.601 { 00:08:42.601 "params": { 00:08:42.601 "block_size": 512, 00:08:42.601 "num_blocks": 1048576, 00:08:42.601 "name": "malloc0" 00:08:42.601 }, 00:08:42.601 "method": "bdev_malloc_create" 00:08:42.601 }, 00:08:42.601 { 00:08:42.601 "params": { 00:08:42.601 "filename": "/dev/zram1", 00:08:42.601 "name": "uring0" 00:08:42.601 }, 00:08:42.601 "method": "bdev_uring_create" 00:08:42.601 }, 00:08:42.601 { 00:08:42.601 "method": "bdev_wait_for_examine" 00:08:42.601 } 00:08:42.601 ] 00:08:42.601 } 00:08:42.601 ] 00:08:42.601 } 00:08:42.601 [2024-07-23 04:04:35.827713] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:42.601 [2024-07-23 04:04:35.844997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.601 [2024-07-23 04:04:35.924544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.859 [2024-07-23 04:04:35.979271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.692  Copying: 217/512 [MB] (217 MBps) Copying: 432/512 [MB] (214 MBps) Copying: 512/512 [MB] (average 215 MBps) 00:08:45.692 00:08:45.692 04:04:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:45.693 04:04:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:45.693 04:04:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:45.693 04:04:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:45.693 [2024-07-23 04:04:38.969445] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
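The `--json /dev/fd/62` on these spdk_dd invocations is bash process substitution: gen_conf prints the bdev subsystem JSON (dumped verbatim in the log) and spdk_dd reads it from the substituted descriptor. A standalone sketch of the read-back step launched just above (uring0 back into magic.dump1), reusing the config exactly as dumped and assuming the same /dev/zram1 device and repo layout:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 \
  --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 \
  --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "filename": "/dev/zram1", "name": "uring0" },
          "method": "bdev_uring_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)

The malloc0 bdev is unused by this particular copy; it is simply part of the shared config the test feeds to every step.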
00:08:45.693 [2024-07-23 04:04:38.969522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77676 ] 00:08:45.693 { 00:08:45.693 "subsystems": [ 00:08:45.693 { 00:08:45.693 "subsystem": "bdev", 00:08:45.693 "config": [ 00:08:45.693 { 00:08:45.693 "params": { 00:08:45.693 "block_size": 512, 00:08:45.693 "num_blocks": 1048576, 00:08:45.693 "name": "malloc0" 00:08:45.693 }, 00:08:45.693 "method": "bdev_malloc_create" 00:08:45.693 }, 00:08:45.693 { 00:08:45.693 "params": { 00:08:45.693 "filename": "/dev/zram1", 00:08:45.693 "name": "uring0" 00:08:45.693 }, 00:08:45.693 "method": "bdev_uring_create" 00:08:45.693 }, 00:08:45.693 { 00:08:45.693 "method": "bdev_wait_for_examine" 00:08:45.693 } 00:08:45.693 ] 00:08:45.693 } 00:08:45.693 ] 00:08:45.693 } 00:08:45.951 [2024-07-23 04:04:39.086642] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:45.951 [2024-07-23 04:04:39.104880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.951 [2024-07-23 04:04:39.188581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.951 [2024-07-23 04:04:39.244872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:49.457  Copying: 184/512 [MB] (184 MBps) Copying: 353/512 [MB] (168 MBps) Copying: 512/512 [MB] (average 174 MBps) 00:08:49.457 00:08:49.457 04:04:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:49.457 04:04:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 5ujc0rs606cfxm96y6kpn23rgx2n46qaavspibm0efn8lcnoy3ksektnh5wozgzd6j02tq13gx6wypahbjrce8jmvn27u3x5sjcehoncdqkp4y2ojgqgc6r8sq2917kz6p2rzibbcgs221x8111uk03xheuj007faxobf6gvz7panbwqo66dnrgljr9q0ecrlw1vzigunwd6io6gj6mggluas1joz98jwvkjkh95pfz3vnv1y3ankq5q7i3tkpqdb9b8rfsy0pyz7shurbpsokc4qd9dby215hi39i3bt3hcw33pmi1ntnkm2k6pg128019czqjisrznmynzwc6i9091283s9fkelq45ohpzh2b5ua3ni464q358o4aw657ai6li0z46cjhv4li4rpbaru8wsop1ghm2s7ogx1v9loiavtuyq1tufovn0nicl89p38jstuseyog1zcozbf79yomvfvkih650eln2sc9p0hiy3v0c6qyzdvjnjttz3n30o3h11sk8sqffl2noa5kamwkkmhhs7shqrjf5nm60z2976zmpgcjv6cn4cgxg820qy48ds82xcafmbzyygql9awc95rmdiezyiyfaay4gpqqzus16i4oy0f70u3ze7yipou10w3ibonte7l7nubjw02xpo5dbz409rmj9n91lbgnm7iw5b7r5axl2mw9fx7v24no75489iizfzv5ocw9w29rzmz93dlfhp422lwwvm5vvzqox1zvl6l01o07d6xcvyqqrdw53a2wd07x07t4xdsxzyo18kc5tleeoovqdpxjdci3wmvr6918jtd7xzscns9f5l65rwet3jzmobwn4amzdri1prwvve767xwntre8jzvnp960n7ym0pat5skw31dx2otvdojj6kuuxr7jmj8i2m8b2gh4ycmjy6prkpz8yebi94e6c9lvul9xfwpxwbgl4wkhvpf2fatdui2z1x332udm9hmhl6ihvgx5ul99zj94cipd9dnl4dm0vc9i3 == 
\5\u\j\c\0\r\s\6\0\6\c\f\x\m\9\6\y\6\k\p\n\2\3\r\g\x\2\n\4\6\q\a\a\v\s\p\i\b\m\0\e\f\n\8\l\c\n\o\y\3\k\s\e\k\t\n\h\5\w\o\z\g\z\d\6\j\0\2\t\q\1\3\g\x\6\w\y\p\a\h\b\j\r\c\e\8\j\m\v\n\2\7\u\3\x\5\s\j\c\e\h\o\n\c\d\q\k\p\4\y\2\o\j\g\q\g\c\6\r\8\s\q\2\9\1\7\k\z\6\p\2\r\z\i\b\b\c\g\s\2\2\1\x\8\1\1\1\u\k\0\3\x\h\e\u\j\0\0\7\f\a\x\o\b\f\6\g\v\z\7\p\a\n\b\w\q\o\6\6\d\n\r\g\l\j\r\9\q\0\e\c\r\l\w\1\v\z\i\g\u\n\w\d\6\i\o\6\g\j\6\m\g\g\l\u\a\s\1\j\o\z\9\8\j\w\v\k\j\k\h\9\5\p\f\z\3\v\n\v\1\y\3\a\n\k\q\5\q\7\i\3\t\k\p\q\d\b\9\b\8\r\f\s\y\0\p\y\z\7\s\h\u\r\b\p\s\o\k\c\4\q\d\9\d\b\y\2\1\5\h\i\3\9\i\3\b\t\3\h\c\w\3\3\p\m\i\1\n\t\n\k\m\2\k\6\p\g\1\2\8\0\1\9\c\z\q\j\i\s\r\z\n\m\y\n\z\w\c\6\i\9\0\9\1\2\8\3\s\9\f\k\e\l\q\4\5\o\h\p\z\h\2\b\5\u\a\3\n\i\4\6\4\q\3\5\8\o\4\a\w\6\5\7\a\i\6\l\i\0\z\4\6\c\j\h\v\4\l\i\4\r\p\b\a\r\u\8\w\s\o\p\1\g\h\m\2\s\7\o\g\x\1\v\9\l\o\i\a\v\t\u\y\q\1\t\u\f\o\v\n\0\n\i\c\l\8\9\p\3\8\j\s\t\u\s\e\y\o\g\1\z\c\o\z\b\f\7\9\y\o\m\v\f\v\k\i\h\6\5\0\e\l\n\2\s\c\9\p\0\h\i\y\3\v\0\c\6\q\y\z\d\v\j\n\j\t\t\z\3\n\3\0\o\3\h\1\1\s\k\8\s\q\f\f\l\2\n\o\a\5\k\a\m\w\k\k\m\h\h\s\7\s\h\q\r\j\f\5\n\m\6\0\z\2\9\7\6\z\m\p\g\c\j\v\6\c\n\4\c\g\x\g\8\2\0\q\y\4\8\d\s\8\2\x\c\a\f\m\b\z\y\y\g\q\l\9\a\w\c\9\5\r\m\d\i\e\z\y\i\y\f\a\a\y\4\g\p\q\q\z\u\s\1\6\i\4\o\y\0\f\7\0\u\3\z\e\7\y\i\p\o\u\1\0\w\3\i\b\o\n\t\e\7\l\7\n\u\b\j\w\0\2\x\p\o\5\d\b\z\4\0\9\r\m\j\9\n\9\1\l\b\g\n\m\7\i\w\5\b\7\r\5\a\x\l\2\m\w\9\f\x\7\v\2\4\n\o\7\5\4\8\9\i\i\z\f\z\v\5\o\c\w\9\w\2\9\r\z\m\z\9\3\d\l\f\h\p\4\2\2\l\w\w\v\m\5\v\v\z\q\o\x\1\z\v\l\6\l\0\1\o\0\7\d\6\x\c\v\y\q\q\r\d\w\5\3\a\2\w\d\0\7\x\0\7\t\4\x\d\s\x\z\y\o\1\8\k\c\5\t\l\e\e\o\o\v\q\d\p\x\j\d\c\i\3\w\m\v\r\6\9\1\8\j\t\d\7\x\z\s\c\n\s\9\f\5\l\6\5\r\w\e\t\3\j\z\m\o\b\w\n\4\a\m\z\d\r\i\1\p\r\w\v\v\e\7\6\7\x\w\n\t\r\e\8\j\z\v\n\p\9\6\0\n\7\y\m\0\p\a\t\5\s\k\w\3\1\d\x\2\o\t\v\d\o\j\j\6\k\u\u\x\r\7\j\m\j\8\i\2\m\8\b\2\g\h\4\y\c\m\j\y\6\p\r\k\p\z\8\y\e\b\i\9\4\e\6\c\9\l\v\u\l\9\x\f\w\p\x\w\b\g\l\4\w\k\h\v\p\f\2\f\a\t\d\u\i\2\z\1\x\3\3\2\u\d\m\9\h\m\h\l\6\i\h\v\g\x\5\u\l\9\9\z\j\9\4\c\i\p\d\9\d\n\l\4\d\m\0\v\c\9\i\3 ]] 00:08:49.457 04:04:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:49.457 04:04:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 5ujc0rs606cfxm96y6kpn23rgx2n46qaavspibm0efn8lcnoy3ksektnh5wozgzd6j02tq13gx6wypahbjrce8jmvn27u3x5sjcehoncdqkp4y2ojgqgc6r8sq2917kz6p2rzibbcgs221x8111uk03xheuj007faxobf6gvz7panbwqo66dnrgljr9q0ecrlw1vzigunwd6io6gj6mggluas1joz98jwvkjkh95pfz3vnv1y3ankq5q7i3tkpqdb9b8rfsy0pyz7shurbpsokc4qd9dby215hi39i3bt3hcw33pmi1ntnkm2k6pg128019czqjisrznmynzwc6i9091283s9fkelq45ohpzh2b5ua3ni464q358o4aw657ai6li0z46cjhv4li4rpbaru8wsop1ghm2s7ogx1v9loiavtuyq1tufovn0nicl89p38jstuseyog1zcozbf79yomvfvkih650eln2sc9p0hiy3v0c6qyzdvjnjttz3n30o3h11sk8sqffl2noa5kamwkkmhhs7shqrjf5nm60z2976zmpgcjv6cn4cgxg820qy48ds82xcafmbzyygql9awc95rmdiezyiyfaay4gpqqzus16i4oy0f70u3ze7yipou10w3ibonte7l7nubjw02xpo5dbz409rmj9n91lbgnm7iw5b7r5axl2mw9fx7v24no75489iizfzv5ocw9w29rzmz93dlfhp422lwwvm5vvzqox1zvl6l01o07d6xcvyqqrdw53a2wd07x07t4xdsxzyo18kc5tleeoovqdpxjdci3wmvr6918jtd7xzscns9f5l65rwet3jzmobwn4amzdri1prwvve767xwntre8jzvnp960n7ym0pat5skw31dx2otvdojj6kuuxr7jmj8i2m8b2gh4ycmjy6prkpz8yebi94e6c9lvul9xfwpxwbgl4wkhvpf2fatdui2z1x332udm9hmhl6ihvgx5ul99zj94cipd9dnl4dm0vc9i3 == 
\5\u\j\c\0\r\s\6\0\6\c\f\x\m\9\6\y\6\k\p\n\2\3\r\g\x\2\n\4\6\q\a\a\v\s\p\i\b\m\0\e\f\n\8\l\c\n\o\y\3\k\s\e\k\t\n\h\5\w\o\z\g\z\d\6\j\0\2\t\q\1\3\g\x\6\w\y\p\a\h\b\j\r\c\e\8\j\m\v\n\2\7\u\3\x\5\s\j\c\e\h\o\n\c\d\q\k\p\4\y\2\o\j\g\q\g\c\6\r\8\s\q\2\9\1\7\k\z\6\p\2\r\z\i\b\b\c\g\s\2\2\1\x\8\1\1\1\u\k\0\3\x\h\e\u\j\0\0\7\f\a\x\o\b\f\6\g\v\z\7\p\a\n\b\w\q\o\6\6\d\n\r\g\l\j\r\9\q\0\e\c\r\l\w\1\v\z\i\g\u\n\w\d\6\i\o\6\g\j\6\m\g\g\l\u\a\s\1\j\o\z\9\8\j\w\v\k\j\k\h\9\5\p\f\z\3\v\n\v\1\y\3\a\n\k\q\5\q\7\i\3\t\k\p\q\d\b\9\b\8\r\f\s\y\0\p\y\z\7\s\h\u\r\b\p\s\o\k\c\4\q\d\9\d\b\y\2\1\5\h\i\3\9\i\3\b\t\3\h\c\w\3\3\p\m\i\1\n\t\n\k\m\2\k\6\p\g\1\2\8\0\1\9\c\z\q\j\i\s\r\z\n\m\y\n\z\w\c\6\i\9\0\9\1\2\8\3\s\9\f\k\e\l\q\4\5\o\h\p\z\h\2\b\5\u\a\3\n\i\4\6\4\q\3\5\8\o\4\a\w\6\5\7\a\i\6\l\i\0\z\4\6\c\j\h\v\4\l\i\4\r\p\b\a\r\u\8\w\s\o\p\1\g\h\m\2\s\7\o\g\x\1\v\9\l\o\i\a\v\t\u\y\q\1\t\u\f\o\v\n\0\n\i\c\l\8\9\p\3\8\j\s\t\u\s\e\y\o\g\1\z\c\o\z\b\f\7\9\y\o\m\v\f\v\k\i\h\6\5\0\e\l\n\2\s\c\9\p\0\h\i\y\3\v\0\c\6\q\y\z\d\v\j\n\j\t\t\z\3\n\3\0\o\3\h\1\1\s\k\8\s\q\f\f\l\2\n\o\a\5\k\a\m\w\k\k\m\h\h\s\7\s\h\q\r\j\f\5\n\m\6\0\z\2\9\7\6\z\m\p\g\c\j\v\6\c\n\4\c\g\x\g\8\2\0\q\y\4\8\d\s\8\2\x\c\a\f\m\b\z\y\y\g\q\l\9\a\w\c\9\5\r\m\d\i\e\z\y\i\y\f\a\a\y\4\g\p\q\q\z\u\s\1\6\i\4\o\y\0\f\7\0\u\3\z\e\7\y\i\p\o\u\1\0\w\3\i\b\o\n\t\e\7\l\7\n\u\b\j\w\0\2\x\p\o\5\d\b\z\4\0\9\r\m\j\9\n\9\1\l\b\g\n\m\7\i\w\5\b\7\r\5\a\x\l\2\m\w\9\f\x\7\v\2\4\n\o\7\5\4\8\9\i\i\z\f\z\v\5\o\c\w\9\w\2\9\r\z\m\z\9\3\d\l\f\h\p\4\2\2\l\w\w\v\m\5\v\v\z\q\o\x\1\z\v\l\6\l\0\1\o\0\7\d\6\x\c\v\y\q\q\r\d\w\5\3\a\2\w\d\0\7\x\0\7\t\4\x\d\s\x\z\y\o\1\8\k\c\5\t\l\e\e\o\o\v\q\d\p\x\j\d\c\i\3\w\m\v\r\6\9\1\8\j\t\d\7\x\z\s\c\n\s\9\f\5\l\6\5\r\w\e\t\3\j\z\m\o\b\w\n\4\a\m\z\d\r\i\1\p\r\w\v\v\e\7\6\7\x\w\n\t\r\e\8\j\z\v\n\p\9\6\0\n\7\y\m\0\p\a\t\5\s\k\w\3\1\d\x\2\o\t\v\d\o\j\j\6\k\u\u\x\r\7\j\m\j\8\i\2\m\8\b\2\g\h\4\y\c\m\j\y\6\p\r\k\p\z\8\y\e\b\i\9\4\e\6\c\9\l\v\u\l\9\x\f\w\p\x\w\b\g\l\4\w\k\h\v\p\f\2\f\a\t\d\u\i\2\z\1\x\3\3\2\u\d\m\9\h\m\h\l\6\i\h\v\g\x\5\u\l\9\9\z\j\9\4\c\i\p\d\9\d\n\l\4\d\m\0\v\c\9\i\3 ]] 00:08:49.457 04:04:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:50.024 04:04:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:50.024 04:04:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:50.024 04:04:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:50.024 04:04:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:50.024 [2024-07-23 04:04:43.242304] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
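A note on the two long backslash-escaped blocks above: after reading magic.dump1 back out of uring0, the test pulls the first 1024 bytes of each dump into verify_magic with `read -rn1024` and compares them against the generated magic inside `[[ ... == ... ]]`. With `set -x` active, bash prints the right-hand side of that comparison with every character backslash-escaped so it is matched literally rather than as a glob, which is what produces the `\5\u\j\c...` walls of text. Stripped of the tracing, the verification is roughly:

# does the 1 KiB magic header survive the round trip through the uring bdev?
read -rn1024 verify_magic < /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
[[ $verify_magic == "$magic" ]]
# and are the two 512 MiB dumps identical byte for byte?
diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1

(the redirect on the read is an assumption; xtrace does not show redirections, but dump0 and dump1 are the only candidates).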
00:08:50.024 [2024-07-23 04:04:43.242411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77751 ] 00:08:50.024 { 00:08:50.024 "subsystems": [ 00:08:50.024 { 00:08:50.024 "subsystem": "bdev", 00:08:50.024 "config": [ 00:08:50.024 { 00:08:50.024 "params": { 00:08:50.024 "block_size": 512, 00:08:50.024 "num_blocks": 1048576, 00:08:50.024 "name": "malloc0" 00:08:50.024 }, 00:08:50.024 "method": "bdev_malloc_create" 00:08:50.024 }, 00:08:50.024 { 00:08:50.024 "params": { 00:08:50.024 "filename": "/dev/zram1", 00:08:50.024 "name": "uring0" 00:08:50.024 }, 00:08:50.024 "method": "bdev_uring_create" 00:08:50.024 }, 00:08:50.024 { 00:08:50.024 "method": "bdev_wait_for_examine" 00:08:50.024 } 00:08:50.024 ] 00:08:50.024 } 00:08:50.024 ] 00:08:50.024 } 00:08:50.024 [2024-07-23 04:04:43.360159] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:50.287 [2024-07-23 04:04:43.379091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.288 [2024-07-23 04:04:43.448931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.288 [2024-07-23 04:04:43.501858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:53.812  Copying: 165/512 [MB] (165 MBps) Copying: 336/512 [MB] (170 MBps) Copying: 505/512 [MB] (168 MBps) Copying: 512/512 [MB] (average 168 MBps) 00:08:53.812 00:08:53.812 04:04:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:53.812 04:04:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:53.812 04:04:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:53.812 04:04:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:53.812 04:04:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:53.812 04:04:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:53.812 04:04:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:53.812 04:04:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:54.070 [2024-07-23 04:04:47.158087] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:54.070 [2024-07-23 04:04:47.158177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77807 ] 00:08:54.070 { 00:08:54.070 "subsystems": [ 00:08:54.070 { 00:08:54.070 "subsystem": "bdev", 00:08:54.070 "config": [ 00:08:54.070 { 00:08:54.070 "params": { 00:08:54.070 "block_size": 512, 00:08:54.070 "num_blocks": 1048576, 00:08:54.070 "name": "malloc0" 00:08:54.070 }, 00:08:54.070 "method": "bdev_malloc_create" 00:08:54.070 }, 00:08:54.070 { 00:08:54.070 "params": { 00:08:54.070 "filename": "/dev/zram1", 00:08:54.070 "name": "uring0" 00:08:54.070 }, 00:08:54.070 "method": "bdev_uring_create" 00:08:54.070 }, 00:08:54.070 { 00:08:54.070 "params": { 00:08:54.070 "name": "uring0" 00:08:54.070 }, 00:08:54.070 "method": "bdev_uring_delete" 00:08:54.070 }, 00:08:54.070 { 00:08:54.070 "method": "bdev_wait_for_examine" 00:08:54.070 } 00:08:54.070 ] 00:08:54.070 } 00:08:54.070 ] 00:08:54.070 } 00:08:54.070 [2024-07-23 04:04:47.280403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:54.070 [2024-07-23 04:04:47.302224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.070 [2024-07-23 04:04:47.392196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.327 [2024-07-23 04:04:47.456635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:54.843  Copying: 0/0 [B] (average 0 Bps) 00:08:54.843 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.843 04:04:48 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:54.843 04:04:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:54.843 [2024-07-23 04:04:48.149764] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:54.843 [2024-07-23 04:04:48.149935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77838 ] 00:08:54.843 { 00:08:54.843 "subsystems": [ 00:08:54.843 { 00:08:54.843 "subsystem": "bdev", 00:08:54.843 "config": [ 00:08:54.843 { 00:08:54.843 "params": { 00:08:54.843 "block_size": 512, 00:08:54.843 "num_blocks": 1048576, 00:08:54.843 "name": "malloc0" 00:08:54.843 }, 00:08:54.843 "method": "bdev_malloc_create" 00:08:54.843 }, 00:08:54.843 { 00:08:54.843 "params": { 00:08:54.843 "filename": "/dev/zram1", 00:08:54.843 "name": "uring0" 00:08:54.844 }, 00:08:54.844 "method": "bdev_uring_create" 00:08:54.844 }, 00:08:54.844 { 00:08:54.844 "params": { 00:08:54.844 "name": "uring0" 00:08:54.844 }, 00:08:54.844 "method": "bdev_uring_delete" 00:08:54.844 }, 00:08:54.844 { 00:08:54.844 "method": "bdev_wait_for_examine" 00:08:54.844 } 00:08:54.844 ] 00:08:54.844 } 00:08:54.844 ] 00:08:54.844 } 00:08:55.102 [2024-07-23 04:04:48.272597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:55.102 [2024-07-23 04:04:48.289589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.102 [2024-07-23 04:04:48.367448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.102 [2024-07-23 04:04:48.424965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:55.360 [2024-07-23 04:04:48.632534] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:55.360 [2024-07-23 04:04:48.632638] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:55.360 [2024-07-23 04:04:48.632651] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:55.360 [2024-07-23 04:04:48.632661] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.618 [2024-07-23 04:04:48.955080] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- 
# [[ -e /sys/block/zram1 ]] 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:55.876 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:56.134 00:08:56.134 real 0m15.122s 00:08:56.134 user 0m10.081s 00:08:56.134 sys 0m12.791s 00:08:56.134 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.134 04:04:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:56.134 ************************************ 00:08:56.134 END TEST dd_uring_copy 00:08:56.134 ************************************ 00:08:56.134 04:04:49 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:08:56.134 00:08:56.134 real 0m15.266s 00:08:56.134 user 0m10.132s 00:08:56.134 sys 0m12.884s 00:08:56.134 04:04:49 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.134 04:04:49 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:56.134 ************************************ 00:08:56.134 END TEST spdk_dd_uring 00:08:56.134 ************************************ 00:08:56.134 04:04:49 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:56.134 04:04:49 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:56.134 04:04:49 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:56.134 04:04:49 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.134 04:04:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:56.134 ************************************ 00:08:56.134 START TEST spdk_dd_sparse 00:08:56.134 ************************************ 00:08:56.134 04:04:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:56.392 * Looking for test storage... 
00:08:56.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:56.392 04:04:49 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.392 04:04:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.392 04:04:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.392 04:04:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.392 04:04:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.392 04:04:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.392 04:04:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.392 04:04:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 
00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:56.393 1+0 records in 00:08:56.393 1+0 records out 00:08:56.393 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00490223 s, 856 MB/s 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:56.393 1+0 records in 00:08:56.393 1+0 records out 00:08:56.393 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00687344 s, 610 MB/s 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:56.393 1+0 records in 00:08:56.393 1+0 records out 00:08:56.393 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00408191 s, 1.0 GB/s 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:56.393 ************************************ 00:08:56.393 START TEST dd_sparse_file_to_file 00:08:56.393 ************************************ 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:56.393 04:04:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:56.393 [2024-07-23 04:04:49.584453] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
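The prepare step above is what makes file_zero1 interesting: three 4 MiB blocks of zeros are written at offsets 0, 16 MiB and 32 MiB (dd's seek= counts in bs-sized units), giving the file an apparent size of 36 MiB while only 12 MiB of it is allocated. A sketch of the resulting layout:

# the three writes from prepare; seek is in 4M units, so they land at 0, 16 MiB and 32 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1           # bytes        0 ..  4194304
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # bytes 16777216 .. 20971520
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # bytes 33554432 .. 37748736
# apparent size: 36 MiB (37748736 bytes); data written: 3 * 4 MiB = 12582912 bytes

That 12582912 figure is also the --bs value passed to every spdk_dd --sparse copy in these tests.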
00:08:56.393 [2024-07-23 04:04:49.584544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77931 ] 00:08:56.393 { 00:08:56.393 "subsystems": [ 00:08:56.393 { 00:08:56.393 "subsystem": "bdev", 00:08:56.393 "config": [ 00:08:56.393 { 00:08:56.393 "params": { 00:08:56.393 "block_size": 4096, 00:08:56.393 "filename": "dd_sparse_aio_disk", 00:08:56.393 "name": "dd_aio" 00:08:56.393 }, 00:08:56.393 "method": "bdev_aio_create" 00:08:56.393 }, 00:08:56.393 { 00:08:56.393 "params": { 00:08:56.393 "lvs_name": "dd_lvstore", 00:08:56.393 "bdev_name": "dd_aio" 00:08:56.393 }, 00:08:56.393 "method": "bdev_lvol_create_lvstore" 00:08:56.393 }, 00:08:56.393 { 00:08:56.393 "method": "bdev_wait_for_examine" 00:08:56.393 } 00:08:56.393 ] 00:08:56.393 } 00:08:56.393 ] 00:08:56.393 } 00:08:56.393 [2024-07-23 04:04:49.705519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:56.393 [2024-07-23 04:04:49.724474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.651 [2024-07-23 04:04:49.786042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.651 [2024-07-23 04:04:49.841844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.908  Copying: 12/36 [MB] (average 923 MBps) 00:08:56.908 00:08:56.908 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:56.908 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:56.908 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:56.908 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:56.908 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:56.908 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:56.908 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:56.908 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:56.908 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:56.908 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:56.908 00:08:56.908 real 0m0.662s 00:08:56.908 user 0m0.394s 00:08:56.908 sys 0m0.362s 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:56.909 ************************************ 00:08:56.909 END TEST dd_sparse_file_to_file 00:08:56.909 ************************************ 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:56.909 04:04:50 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:56.909 ************************************ 00:08:56.909 START TEST dd_sparse_file_to_bdev 00:08:56.909 ************************************ 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:56.909 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:57.167 [2024-07-23 04:04:50.298279] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:57.167 [2024-07-23 04:04:50.298386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77973 ] 00:08:57.167 { 00:08:57.167 "subsystems": [ 00:08:57.167 { 00:08:57.167 "subsystem": "bdev", 00:08:57.167 "config": [ 00:08:57.167 { 00:08:57.167 "params": { 00:08:57.167 "block_size": 4096, 00:08:57.167 "filename": "dd_sparse_aio_disk", 00:08:57.167 "name": "dd_aio" 00:08:57.167 }, 00:08:57.167 "method": "bdev_aio_create" 00:08:57.167 }, 00:08:57.167 { 00:08:57.167 "params": { 00:08:57.167 "lvs_name": "dd_lvstore", 00:08:57.167 "lvol_name": "dd_lvol", 00:08:57.167 "size_in_mib": 36, 00:08:57.167 "thin_provision": true 00:08:57.167 }, 00:08:57.167 "method": "bdev_lvol_create" 00:08:57.167 }, 00:08:57.167 { 00:08:57.167 "method": "bdev_wait_for_examine" 00:08:57.167 } 00:08:57.167 ] 00:08:57.167 } 00:08:57.167 ] 00:08:57.167 } 00:08:57.167 [2024-07-23 04:04:50.419561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
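For reference, the stat-based sparseness check that closed dd_sparse_file_to_file above (and that dd_sparse_bdev_to_file repeats for file_zero3 further down) compares apparent length against allocated blocks; %b counts blocks of (here) 512 bytes, so the values in the log decode as:

stat --printf=%s file_zero2   # 37748736 bytes = 36 MiB apparent size
stat --printf=%b file_zero2   # 24576 blocks * 512 bytes = 12582912 bytes = 12 MiB allocated

i.e. the --sparse copies kept the holes, and only the three 4 MiB extents written during prepare take up space.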
00:08:57.167 [2024-07-23 04:04:50.434015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.167 [2024-07-23 04:04:50.494829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.426 [2024-07-23 04:04:50.550408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.684  Copying: 12/36 [MB] (average 428 MBps) 00:08:57.684 00:08:57.684 00:08:57.684 real 0m0.624s 00:08:57.684 user 0m0.406s 00:08:57.684 sys 0m0.331s 00:08:57.684 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.684 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:57.684 ************************************ 00:08:57.684 END TEST dd_sparse_file_to_bdev 00:08:57.684 ************************************ 00:08:57.684 04:04:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:57.684 04:04:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:57.685 ************************************ 00:08:57.685 START TEST dd_sparse_bdev_to_file 00:08:57.685 ************************************ 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:57.685 04:04:50 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:57.685 [2024-07-23 04:04:50.969788] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:08:57.685 [2024-07-23 04:04:50.969886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78006 ] 00:08:57.685 { 00:08:57.685 "subsystems": [ 00:08:57.685 { 00:08:57.685 "subsystem": "bdev", 00:08:57.685 "config": [ 00:08:57.685 { 00:08:57.685 "params": { 00:08:57.685 "block_size": 4096, 00:08:57.685 "filename": "dd_sparse_aio_disk", 00:08:57.685 "name": "dd_aio" 00:08:57.685 }, 00:08:57.685 "method": "bdev_aio_create" 00:08:57.685 }, 00:08:57.685 { 00:08:57.685 "method": "bdev_wait_for_examine" 00:08:57.685 } 00:08:57.685 ] 00:08:57.685 } 00:08:57.685 ] 00:08:57.685 } 00:08:57.943 [2024-07-23 04:04:51.087091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:57.944 [2024-07-23 04:04:51.103621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.944 [2024-07-23 04:04:51.162479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.944 [2024-07-23 04:04:51.219531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.202  Copying: 12/36 [MB] (average 923 MBps) 00:08:58.202 00:08:58.202 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:58.202 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:58.202 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:58.461 00:08:58.461 real 0m0.637s 00:08:58.461 user 0m0.395s 00:08:58.461 sys 0m0.351s 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:58.461 ************************************ 00:08:58.461 END TEST dd_sparse_bdev_to_file 00:08:58.461 ************************************ 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:58.461 00:08:58.461 real 
0m2.222s 00:08:58.461 user 0m1.298s 00:08:58.461 sys 0m1.227s 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.461 ************************************ 00:08:58.461 END TEST spdk_dd_sparse 00:08:58.461 04:04:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:58.461 ************************************ 00:08:58.461 04:04:51 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:58.461 04:04:51 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:58.461 04:04:51 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.461 04:04:51 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.461 04:04:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:58.461 ************************************ 00:08:58.461 START TEST spdk_dd_negative 00:08:58.461 ************************************ 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:58.461 * Looking for test storage... 00:08:58.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- 
paths/export.sh@5 -- # export PATH 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.461 ************************************ 00:08:58.461 START TEST dd_invalid_arguments 00:08:58.461 ************************************ 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.461 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.462 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.462 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.462 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.462 04:04:51 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:58.721 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:58.721 00:08:58.721 CPU options: 00:08:58.721 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:58.721 (like [0,1,10]) 00:08:58.721 --lcores lcore to CPU mapping list. The list is in the format: 00:08:58.721 [<,lcores[@CPUs]>...] 00:08:58.721 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:58.721 Within the group, '-' is used for range separator, 00:08:58.721 ',' is used for single number separator. 00:08:58.721 '( )' can be omitted for single element group, 00:08:58.721 '@' can be omitted if cpus and lcores have the same value 00:08:58.721 --disable-cpumask-locks Disable CPU core lock files. 00:08:58.721 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:58.721 pollers in the app support interrupt mode) 00:08:58.721 -p, --main-core main (primary) core for DPDK 00:08:58.721 00:08:58.721 Configuration options: 00:08:58.721 -c, --config, --json JSON config file 00:08:58.721 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:58.721 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:58.721 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:58.721 --rpcs-allowed comma-separated list of permitted RPCS 00:08:58.721 --json-ignore-init-errors don't exit on invalid config entry 00:08:58.721 00:08:58.721 Memory options: 00:08:58.721 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:58.721 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:58.721 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:58.721 -R, --huge-unlink unlink huge files after initialization 00:08:58.721 -n, --mem-channels number of memory channels used for DPDK 00:08:58.721 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:58.721 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:58.721 --no-huge run without using hugepages 00:08:58.721 -i, --shm-id shared memory ID (optional) 00:08:58.721 -g, --single-file-segments force creating just one hugetlbfs file 00:08:58.721 00:08:58.721 PCI options: 00:08:58.721 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:58.721 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:58.721 -u, --no-pci disable PCI access 00:08:58.721 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:58.721 00:08:58.721 Log options: 00:08:58.721 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:58.721 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:58.721 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:58.721 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:58.721 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:08:58.721 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:08:58.721 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:08:58.721 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:58.721 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:08:58.721 virtio, virtio_blk, virtio_dev, virtio_pci, 
virtio_user, 00:08:58.721 virtio_vfio_user, vmd) 00:08:58.721 --silence-noticelog disable notice level logging to stderr 00:08:58.721 00:08:58.721 Trace options: 00:08:58.721 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:58.721 setting 0 to disable trace (default 32768) 00:08:58.721 Tracepoints vary in size and can use more than one trace entry. 00:08:58.721 -e, --tpoint-group [:] 00:08:58.721 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:58.721 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:58.721 [2024-07-23 04:04:51.837719] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:58.721 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:08:58.721 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:58.721 a tracepoint group. First tpoint inside a group can be enabled by 00:08:58.721 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:58.721 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:58.721 in /include/spdk_internal/trace_defs.h 00:08:58.721 00:08:58.721 Other options: 00:08:58.721 -h, --help show this usage 00:08:58.721 -v, --version print SPDK version 00:08:58.721 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:58.721 --env-context Opaque context for use of the env implementation 00:08:58.721 00:08:58.721 Application specific: 00:08:58.721 [--------- DD Options ---------] 00:08:58.721 --if Input file. Must specify either --if or --ib. 00:08:58.721 --ib Input bdev. Must specifier either --if or --ib 00:08:58.721 --of Output file. Must specify either --of or --ob. 00:08:58.721 --ob Output bdev. Must specify either --of or --ob. 00:08:58.721 --iflag Input file flags. 00:08:58.721 --oflag Output file flags. 00:08:58.721 --bs I/O unit size (default: 4096) 00:08:58.721 --qd Queue depth (default: 2) 00:08:58.721 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:58.721 --skip Skip this many I/O units at start of input. (default: 0) 00:08:58.721 --seek Skip this many I/O units at start of output. (default: 0) 00:08:58.721 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:08:58.721 --sparse Enable hole skipping in input target 00:08:58.721 Available iflag and oflag values: 00:08:58.721 append - append mode 00:08:58.721 direct - use direct I/O for data 00:08:58.721 directory - fail unless a directory 00:08:58.721 dsync - use synchronized I/O for data 00:08:58.722 noatime - do not update access time 00:08:58.722 noctty - do not assign controlling terminal from file 00:08:58.722 nofollow - do not follow symlinks 00:08:58.722 nonblock - use non-blocking I/O 00:08:58.722 sync - use synchronized I/O for data and metadata 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:58.722 00:08:58.722 real 0m0.078s 00:08:58.722 user 0m0.047s 00:08:58.722 sys 0m0.027s 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:58.722 ************************************ 00:08:58.722 END TEST dd_invalid_arguments 00:08:58.722 ************************************ 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.722 ************************************ 00:08:58.722 START TEST dd_double_input 00:08:58.722 ************************************ 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:58.722 [2024-07-23 04:04:51.967682] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:58.722 00:08:58.722 real 0m0.077s 00:08:58.722 user 0m0.046s 00:08:58.722 sys 0m0.028s 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.722 04:04:51 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:58.722 ************************************ 00:08:58.722 END TEST dd_double_input 00:08:58.722 ************************************ 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.722 ************************************ 00:08:58.722 START TEST dd_double_output 00:08:58.722 ************************************ 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.722 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:58.983 [2024-07-23 04:04:52.081811] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:58.983 00:08:58.983 real 0m0.054s 00:08:58.983 user 0m0.028s 00:08:58.983 sys 0m0.024s 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:58.983 ************************************ 00:08:58.983 END TEST dd_double_output 00:08:58.983 ************************************ 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.983 ************************************ 00:08:58.983 START TEST dd_no_input 00:08:58.983 ************************************ 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.983 04:04:52 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:58.983 [2024-07-23 04:04:52.202171] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:58.983 00:08:58.983 real 0m0.073s 00:08:58.983 user 0m0.041s 00:08:58.983 sys 0m0.030s 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:58.983 ************************************ 00:08:58.983 END TEST dd_no_input 00:08:58.983 ************************************ 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.983 ************************************ 00:08:58.983 START TEST dd_no_output 00:08:58.983 ************************************ 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.983 04:04:52 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.983 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:58.984 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:58.984 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:59.246 [2024-07-23 04:04:52.331952] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:59.247 00:08:59.247 real 0m0.075s 00:08:59.247 user 0m0.048s 00:08:59.247 sys 0m0.024s 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:59.247 ************************************ 00:08:59.247 END TEST dd_no_output 00:08:59.247 ************************************ 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:59.247 ************************************ 00:08:59.247 START TEST dd_wrong_blocksize 00:08:59.247 ************************************ 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # 
local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:59.247 [2024-07-23 04:04:52.451207] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:59.247 00:08:59.247 real 0m0.069s 00:08:59.247 user 0m0.047s 00:08:59.247 sys 0m0.021s 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:59.247 ************************************ 00:08:59.247 END TEST dd_wrong_blocksize 00:08:59.247 ************************************ 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:59.247 ************************************ 00:08:59.247 START TEST dd_smaller_blocksize 00:08:59.247 ************************************ 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:59.247 
04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.247 04:04:52 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:59.247 [2024-07-23 04:04:52.572241] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:08:59.247 [2024-07-23 04:04:52.572320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78230 ] 00:08:59.506 [2024-07-23 04:04:52.693088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
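The smaller_blocksize case above hands spdk_dd an absurd --bs value (99999999999999) and relies on the suite's NOT wrapper to require a failing exit; the allocation failure is logged just below. A stand-alone sketch of the check this test performs, assuming only the binary path and the dd.dump0/dd.dump1 scratch files already used in this run (the real suite routes this through run_test/NOT in common/autotest_common.sh rather than a bare if):

  # Sketch: expect spdk_dd to reject an oversized block size with a non-zero exit.
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --bs=99999999999999; then
      echo "unexpected: oversized --bs was accepted" >&2
      exit 1
  fi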
00:08:59.506 [2024-07-23 04:04:52.714145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.506 [2024-07-23 04:04:52.796245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.765 [2024-07-23 04:04:52.853518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:59.765 [2024-07-23 04:04:52.885310] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:59.765 [2024-07-23 04:04:52.885376] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:59.765 [2024-07-23 04:04:53.000069] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:59.765 04:04:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:08:59.765 04:04:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:59.765 04:04:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:08:59.765 04:04:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:08:59.765 04:04:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:08:59.765 04:04:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:59.765 00:08:59.765 real 0m0.588s 00:08:59.765 user 0m0.329s 00:08:59.765 sys 0m0.153s 00:08:59.765 04:04:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.765 04:04:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:59.765 ************************************ 00:08:59.765 END TEST dd_smaller_blocksize 00:09:00.024 ************************************ 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.024 ************************************ 00:09:00.024 START TEST dd_invalid_count 00:09:00.024 ************************************ 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:00.024 [2024-07-23 04:04:53.219832] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:00.024 00:09:00.024 real 0m0.075s 00:09:00.024 user 0m0.050s 00:09:00.024 sys 0m0.024s 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:00.024 ************************************ 00:09:00.024 END TEST dd_invalid_count 00:09:00.024 ************************************ 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.024 ************************************ 00:09:00.024 START TEST dd_invalid_oflag 00:09:00.024 ************************************ 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.024 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:00.024 [2024-07-23 04:04:53.350727] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:00.283 00:09:00.283 real 0m0.077s 00:09:00.283 user 0m0.050s 00:09:00.283 sys 0m0.025s 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:00.283 ************************************ 00:09:00.283 END TEST dd_invalid_oflag 00:09:00.283 ************************************ 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.283 ************************************ 00:09:00.283 START TEST dd_invalid_iflag 00:09:00.283 ************************************ 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:00.283 [2024-07-23 04:04:53.475858] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:00.283 00:09:00.283 real 0m0.072s 00:09:00.283 user 0m0.050s 00:09:00.283 sys 0m0.021s 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.283 ************************************ 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:00.283 END TEST dd_invalid_iflag 00:09:00.283 ************************************ 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.283 ************************************ 00:09:00.283 START TEST dd_unknown_flag 00:09:00.283 ************************************ 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.283 04:04:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:00.283 [2024-07-23 04:04:53.606416] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:00.283 [2024-07-23 04:04:53.606501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78322 ] 00:09:00.542 [2024-07-23 04:04:53.729216] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
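The unknown_flag case passes --oflag=-1, which is not one of the iflag/oflag names spdk_dd printed in its usage text earlier in this log (append, direct, directory, dsync, noatime, noctty, nofollow, nonblock, sync), so flag parsing rejects it, as logged just below. For contrast, a sketch of an invocation that should pass flag parsing, reusing the same scratch files; dsync is taken from that printed list, everything else is unchanged from the test's own arguments:

  # Sketch: dsync is a flag name listed by spdk_dd's own usage output above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --oflag=dsync --bs=4096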
00:09:00.542 [2024-07-23 04:04:53.749365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.542 [2024-07-23 04:04:53.829355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.800 [2024-07-23 04:04:53.887418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.800 [2024-07-23 04:04:53.918676] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:00.800 [2024-07-23 04:04:53.918734] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:00.800 [2024-07-23 04:04:53.918790] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:00.800 [2024-07-23 04:04:53.918804] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:00.800 [2024-07-23 04:04:53.919083] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:00.800 [2024-07-23 04:04:53.919101] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:00.800 [2024-07-23 04:04:53.919156] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:00.800 [2024-07-23 04:04:53.919167] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:00.800 [2024-07-23 04:04:54.035711] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:00.800 ************************************ 00:09:00.800 END TEST dd_unknown_flag 00:09:00.800 ************************************ 00:09:00.800 04:04:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:09:00.800 04:04:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:00.800 04:04:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:09:00.800 04:04:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:09:00.800 04:04:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:09:00.800 04:04:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:00.800 00:09:00.800 real 0m0.580s 00:09:00.800 user 0m0.314s 00:09:00.800 sys 0m0.172s 00:09:00.800 04:04:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.800 04:04:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:01.059 ************************************ 00:09:01.059 START TEST dd_invalid_json 00:09:01.059 ************************************ 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- 
# local es=0 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.059 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:01.059 [2024-07-23 04:04:54.239679] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:01.059 [2024-07-23 04:04:54.239766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78345 ] 00:09:01.059 [2024-07-23 04:04:54.361824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
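The invalid_json case points --json at /dev/fd/62 while feeding it an empty stream (the ':' no-op above), which json_config.c flags just below as empty JSON data. A sketch of the same descriptor-redirection pattern with a non-empty body; the '{"subsystems": []}' payload is an assumed minimal well-formed config, not something taken from this run:

  # Sketch: same --json /dev/fd/NN mechanism as the test, but with non-empty JSON.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --json <(printf '{"subsystems": []}')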
00:09:01.059 [2024-07-23 04:04:54.384795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.318 [2024-07-23 04:04:54.478413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.318 [2024-07-23 04:04:54.478490] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:01.318 [2024-07-23 04:04:54.478509] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:01.318 [2024-07-23 04:04:54.478520] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:01.318 [2024-07-23 04:04:54.478566] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:01.318 00:09:01.318 real 0m0.386s 00:09:01.318 user 0m0.201s 00:09:01.318 sys 0m0.083s 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:01.318 ************************************ 00:09:01.318 END TEST dd_invalid_json 00:09:01.318 ************************************ 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:09:01.318 00:09:01.318 real 0m2.935s 00:09:01.318 user 0m1.470s 00:09:01.318 sys 0m1.086s 00:09:01.318 ************************************ 00:09:01.318 END TEST spdk_dd_negative 00:09:01.318 ************************************ 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.318 04:04:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:01.318 04:04:54 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:09:01.318 ************************************ 00:09:01.318 END TEST spdk_dd 00:09:01.318 ************************************ 00:09:01.318 00:09:01.318 real 1m14.554s 00:09:01.318 user 0m47.199s 00:09:01.318 sys 0m33.846s 00:09:01.318 04:04:54 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.318 04:04:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 04:04:54 -- common/autotest_common.sh@1142 -- # return 0 00:09:01.577 04:04:54 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:09:01.577 04:04:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:01.577 04:04:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:01.577 04:04:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.577 04:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 04:04:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:01.577 04:04:54 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:09:01.577 04:04:54 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:09:01.577 04:04:54 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:09:01.577 04:04:54 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:09:01.577 04:04:54 -- 
spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:09:01.577 04:04:54 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:01.577 04:04:54 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.577 04:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.577 04:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 ************************************ 00:09:01.577 START TEST nvmf_tcp 00:09:01.577 ************************************ 00:09:01.577 04:04:54 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:01.577 * Looking for test storage... 00:09:01.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:01.577 04:04:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:01.577 04:04:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:01.577 04:04:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:01.577 04:04:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.577 04:04:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.577 04:04:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 ************************************ 00:09:01.577 START TEST nvmf_target_core 00:09:01.577 ************************************ 00:09:01.577 04:04:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:01.577 * Looking for test storage... 00:09:01.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.838 ************************************ 00:09:01.838 START TEST nvmf_host_management 00:09:01.838 ************************************ 00:09:01.838 04:04:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:01.838 * Looking for test storage... 
00:09:01.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.838 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:01.839 Cannot find device "nvmf_init_br" 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:01.839 Cannot find device "nvmf_tgt_br" 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.839 Cannot find device "nvmf_tgt_br2" 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:01.839 Cannot find device "nvmf_init_br" 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:01.839 Cannot find device "nvmf_tgt_br" 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:01.839 Cannot find device "nvmf_tgt_br2" 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:01.839 Cannot find device "nvmf_br" 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:01.839 Cannot find device "nvmf_init_if" 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:01.839 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:02.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:09:02.099 00:09:02.099 --- 10.0.0.2 ping statistics --- 00:09:02.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.099 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:02.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:02.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:09:02.099 00:09:02.099 --- 10.0.0.3 ping statistics --- 00:09:02.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.099 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:02.099 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:02.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:02.099 00:09:02.099 --- 10.0.0.1 ping statistics --- 00:09:02.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.099 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=78629 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 78629 00:09:02.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
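The nvmf_veth_init steps traced above wire up the test network before the target starts. A minimal stand-alone sketch of that topology, reconstructed from the ip/iptables calls visible in the log, follows; interface names and the 10.0.0.x addresses are taken from the trace, the second target interface (nvmf_tgt_if2 with 10.0.0.3) is created the same way and omitted for brevity, and the real helper in nvmf/common.sh additionally handles cleanup and error checks.

# initiator side stays in the default netns; the target gets its own netns
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# addresses: 10.0.0.1 = initiator, 10.0.0.2 = target (inside the netns)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the peer ends so initiator and target namespaces can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# allow NVMe/TCP traffic on the default port and verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2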
00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 78629 ']' 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.358 04:04:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.358 [2024-07-23 04:04:55.530221] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:02.358 [2024-07-23 04:04:55.530344] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.358 [2024-07-23 04:04:55.655738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:02.358 [2024-07-23 04:04:55.676101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.617 [2024-07-23 04:04:55.744854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.617 [2024-07-23 04:04:55.745217] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.617 [2024-07-23 04:04:55.745431] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.617 [2024-07-23 04:04:55.745615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.617 [2024-07-23 04:04:55.745667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
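The target is launched inside that namespace with the command captured above and then provisioned over its RPC socket. A rough sketch of those steps as plain rpc.py calls is given below; the launch flags, transport options and addresses are copied from the trace, but the contents of the rpcs.txt batch are not printed in the log, so the bdev/subsystem RPCs here are an assumption based on the Malloc0 bdev and the 10.0.0.2:4420 listener for nqn.2016-06.io.spdk:cnode0 that appear later.

# launch the target inside the test namespace (flags as captured in the trace)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init   # wait for /var/tmp/spdk.sock

# create the TCP transport (same options as the rpc_cmd call in the trace)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

# plausible equivalent of the rpcs.txt batch (assumption: only its results are logged)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0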
00:09:02.617 [2024-07-23 04:04:55.746284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.617 [2024-07-23 04:04:55.746422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.617 [2024-07-23 04:04:55.746557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:02.617 [2024-07-23 04:04:55.746563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.617 [2024-07-23 04:04:55.805247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.184 [2024-07-23 04:04:56.483863] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.184 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.443 Malloc0 00:09:03.443 [2024-07-23 04:04:56.559043] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=78690 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 78690 /var/tmp/bdevperf.sock 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 78690 ']' 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.443 { 00:09:03.443 "params": { 00:09:03.443 "name": "Nvme$subsystem", 00:09:03.443 "trtype": "$TEST_TRANSPORT", 00:09:03.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.443 "adrfam": "ipv4", 00:09:03.443 "trsvcid": "$NVMF_PORT", 00:09:03.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.443 "hdgst": ${hdgst:-false}, 00:09:03.443 "ddgst": ${ddgst:-false} 00:09:03.443 }, 00:09:03.443 "method": "bdev_nvme_attach_controller" 00:09:03.443 } 00:09:03.443 EOF 00:09:03.443 )") 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:03.443 04:04:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.443 "params": { 00:09:03.443 "name": "Nvme0", 00:09:03.443 "trtype": "tcp", 00:09:03.443 "traddr": "10.0.0.2", 00:09:03.443 "adrfam": "ipv4", 00:09:03.443 "trsvcid": "4420", 00:09:03.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:03.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:03.443 "hdgst": false, 00:09:03.443 "ddgst": false 00:09:03.444 }, 00:09:03.444 "method": "bdev_nvme_attach_controller" 00:09:03.444 }' 00:09:03.444 [2024-07-23 04:04:56.660818] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:03.444 [2024-07-23 04:04:56.660940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78690 ] 00:09:03.444 [2024-07-23 04:04:56.783451] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:03.702 [2024-07-23 04:04:56.803874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.702 [2024-07-23 04:04:56.874159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.702 [2024-07-23 04:04:56.942808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.961 Running I/O for 10 seconds... 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:04.530 04:04:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.530 04:04:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:04.530 [2024-07-23 04:04:57.727998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.530 [2024-07-23 04:04:57.728450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.530 [2024-07-23 04:04:57.728459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.728990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.728999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.531 [2024-07-23 04:04:57.729402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.531 [2024-07-23 04:04:57.729414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.532 [2024-07-23 04:04:57.729654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1770990 is same with the state(5) to be set 00:09:04.532 [2024-07-23 04:04:57.729733] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1770990 was disconnected and freed. reset controller. 00:09:04.532 [2024-07-23 04:04:57.729852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.532 [2024-07-23 04:04:57.729871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.532 [2024-07-23 04:04:57.729907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.532 [2024-07-23 04:04:57.729932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.532 [2024-07-23 04:04:57.729952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.532 [2024-07-23 04:04:57.729962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d4e60 is same with the state(5) to be set 00:09:04.532 [2024-07-23 04:04:57.731101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:04.532 task offset: 122880 on job bdev=Nvme0n1 fails 00:09:04.532 00:09:04.532 Latency(us) 00:09:04.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.532 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:04.532 Job: Nvme0n1 ended in about 0.67 seconds with error 00:09:04.532 Verification LBA range: start 0x0 length 0x400 00:09:04.532 Nvme0n1 : 0.67 1438.99 89.94 95.93 0.00 40367.70 2457.60 46232.67 00:09:04.532 =================================================================================================================== 00:09:04.532 Total : 1438.99 89.94 95.93 0.00 40367.70 2457.60 46232.67 00:09:04.532 [2024-07-23 04:04:57.733171] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.532 [2024-07-23 04:04:57.733312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d4e60 (9): Bad file descriptor 00:09:04.532 [2024-07-23 04:04:57.741409] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
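Every completion in the burst above carries the same status pair (00/08): Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, Command Aborted due to SQ Deletion. That is the expected signature of the reset path this test exercises: bdev_nvme disconnects and frees qpair 0x1770990, every WRITE still queued on I/O submission queue 1 completes as aborted, bdevperf fails the Nvme0n1 job at task offset 122880, and the app stops with a non-zero status before the controller reset is reported successful. A minimal sketch for sanity-checking a saved console log (the file name console.log is illustrative, not part of the test suite):

# Count the I/O-queue completions aborted by the SQ deletion; sct=0x0 is the
# generic status type and sc=0x08 is "Command Aborted due to SQ Deletion".
grep -o 'ABORTED - SQ DELETION (00/08) qid:1' console.log | wc -l

# Confirm the I/O geometry: len:128 blocks of 512 B = 64 KiB per WRITE,
# matching the bdevperf arguments (depth: 64, IO size: 65536) in this run.
grep -o 'lba:[0-9]* len:128' console.log | head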
00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 78690 00:09:05.468 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (78690) - No such process 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:05.468 { 00:09:05.468 "params": { 00:09:05.468 "name": "Nvme$subsystem", 00:09:05.468 "trtype": "$TEST_TRANSPORT", 00:09:05.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.468 "adrfam": "ipv4", 00:09:05.468 "trsvcid": "$NVMF_PORT", 00:09:05.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.468 "hdgst": ${hdgst:-false}, 00:09:05.468 "ddgst": ${ddgst:-false} 00:09:05.468 }, 00:09:05.468 "method": "bdev_nvme_attach_controller" 00:09:05.468 } 00:09:05.468 EOF 00:09:05.468 )") 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:05.468 04:04:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:05.468 "params": { 00:09:05.468 "name": "Nvme0", 00:09:05.468 "trtype": "tcp", 00:09:05.468 "traddr": "10.0.0.2", 00:09:05.468 "adrfam": "ipv4", 00:09:05.468 "trsvcid": "4420", 00:09:05.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:05.468 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:05.468 "hdgst": false, 00:09:05.468 "ddgst": false 00:09:05.468 }, 00:09:05.468 "method": "bdev_nvme_attach_controller" 00:09:05.468 }' 00:09:05.468 [2024-07-23 04:04:58.792881] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:05.468 [2024-07-23 04:04:58.793024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78728 ] 00:09:05.727 [2024-07-23 04:04:58.917814] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
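For the retry above, the bdevperf config is generated inline: gen_nvmf_target_json expands the heredoc into a bdev_nvme_attach_controller entry and hands it to bdevperf over /dev/fd/62. A standalone sketch of the same invocation follows; the "subsystems"/"config" envelope is an assumption about the wrapper the helper emits, while the params block and the bdevperf flags are taken verbatim from the trace.

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same bdevperf run as in the trace, reading the config from a file instead of /dev/fd/62.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json \
    -q 64 -o 65536 -w verify -t 1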
00:09:05.727 [2024-07-23 04:04:58.937439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.727 [2024-07-23 04:04:59.014193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.986 [2024-07-23 04:04:59.077884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:05.986 Running I/O for 1 seconds... 00:09:06.921 00:09:06.921 Latency(us) 00:09:06.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.921 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:06.921 Verification LBA range: start 0x0 length 0x400 00:09:06.921 Nvme0n1 : 1.04 1607.53 100.47 0.00 0.00 39079.11 4647.10 34555.35 00:09:06.921 =================================================================================================================== 00:09:06.921 Total : 1607.53 100.47 0.00 0.00 39079.11 4647.10 34555.35 00:09:07.179 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:07.179 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:07.179 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:07.179 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:07.180 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:07.180 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.180 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.467 rmmod nvme_tcp 00:09:07.467 rmmod nvme_fabrics 00:09:07.467 rmmod nvme_keyring 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 78629 ']' 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 78629 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 78629 ']' 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 78629 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 78629 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:07.467 killing process with pid 78629 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78629' 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 78629 00:09:07.467 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 78629 00:09:07.726 [2024-07-23 04:05:00.856712] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:07.726 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.726 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.726 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.726 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.726 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.726 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.726 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.726 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.726 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:07.726 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:07.726 ************************************ 00:09:07.726 END TEST nvmf_host_management 00:09:07.726 ************************************ 00:09:07.726 00:09:07.726 real 0m5.953s 00:09:07.726 user 0m23.059s 00:09:07.727 sys 0m1.539s 00:09:07.727 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.727 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.727 04:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:07.727 04:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:07.727 04:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:07.727 04:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.727 04:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.727 ************************************ 00:09:07.727 START TEST nvmf_lvol 00:09:07.727 ************************************ 00:09:07.727 04:05:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:07.727 * Looking for test storage... 
00:09:07.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.727 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:07.986 Cannot find device "nvmf_tgt_br" 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:07.986 Cannot find device "nvmf_tgt_br2" 00:09:07.986 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:07.987 Cannot find device "nvmf_tgt_br" 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:07.987 Cannot find device "nvmf_tgt_br2" 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:07.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:07.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:07.987 04:05:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:07.987 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:08.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:09:08.246 00:09:08.246 --- 10.0.0.2 ping statistics --- 00:09:08.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.246 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:08.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:08.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:09:08.246 00:09:08.246 --- 10.0.0.3 ping statistics --- 00:09:08.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.246 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:08.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:08.246 00:09:08.246 --- 10.0.0.1 ping statistics --- 00:09:08.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.246 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=78933 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 78933 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 78933 ']' 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.246 04:05:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:08.246 [2024-07-23 04:05:01.464054] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:08.247 [2024-07-23 04:05:01.464163] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.247 [2024-07-23 04:05:01.587927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
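The block above is nvmf_veth_init from test/nvmf/common.sh: two target-side veth interfaces (10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, bridged to the host-side nvmf_init_if (10.0.0.1) over nvmf_br, port 4420 is opened in iptables, and the three pings verify the topology before nvmf_tgt starts inside the namespace. Condensed into a standalone sketch (assumes root and a clean host; to rerun, first remove nvmf_br, nvmf_init_if and the nvmf_tgt_ns_spdk namespace):

set -e
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry the addresses, the *_br ends get enslaved to the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1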
00:09:08.505 [2024-07-23 04:05:01.608058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:08.505 [2024-07-23 04:05:01.679099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.505 [2024-07-23 04:05:01.679361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.505 [2024-07-23 04:05:01.679458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.505 [2024-07-23 04:05:01.679563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.505 [2024-07-23 04:05:01.679647] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.505 [2024-07-23 04:05:01.679927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.505 [2024-07-23 04:05:01.680160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.505 [2024-07-23 04:05:01.680193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.505 [2024-07-23 04:05:01.738143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:09.073 04:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.073 04:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:09.073 04:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.073 04:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.073 04:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.331 04:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.331 04:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.332 [2024-07-23 04:05:02.639402] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.590 04:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.849 04:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:09.849 04:05:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.114 04:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:10.114 04:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:10.378 04:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:10.636 04:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=56ddaac4-7e89-4480-8558-df21ea6feb3e 00:09:10.636 04:05:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 56ddaac4-7e89-4480-8558-df21ea6feb3e lvol 20 00:09:10.895 04:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=e63ff042-570d-40e1-a0a0-925f05df46d9 00:09:10.895 04:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:11.153 04:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e63ff042-570d-40e1-a0a0-925f05df46d9 00:09:11.153 04:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:11.412 [2024-07-23 04:05:04.700229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.412 04:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.671 04:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=79009 00:09:11.671 04:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:11.671 04:05:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:13.046 04:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot e63ff042-570d-40e1-a0a0-925f05df46d9 MY_SNAPSHOT 00:09:13.046 04:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bf6fcb44-183b-47d9-9f74-349a33f9e846 00:09:13.046 04:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize e63ff042-570d-40e1-a0a0-925f05df46d9 30 00:09:13.305 04:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone bf6fcb44-183b-47d9-9f74-349a33f9e846 MY_CLONE 00:09:13.573 04:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5d5fb1a8-301d-468f-a985-66c7c8595506 00:09:13.573 04:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 5d5fb1a8-301d-468f-a985-66c7c8595506 00:09:14.162 04:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 79009 00:09:22.323 Initializing NVMe Controllers 00:09:22.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:22.323 Controller IO queue size 128, less than required. 00:09:22.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:22.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:22.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:22.323 Initialization complete. Launching workers. 
00:09:22.323 ======================================================== 00:09:22.323 Latency(us) 00:09:22.323 Device Information : IOPS MiB/s Average min max 00:09:22.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9196.68 35.92 13937.60 1178.80 122604.02 00:09:22.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8538.74 33.35 14998.25 2796.01 66619.04 00:09:22.323 ======================================================== 00:09:22.323 Total : 17735.43 69.28 14448.25 1178.80 122604.02 00:09:22.323 00:09:22.323 04:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:22.323 04:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e63ff042-570d-40e1-a0a0-925f05df46d9 00:09:22.583 04:05:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 56ddaac4-7e89-4480-8558-df21ea6feb3e 00:09:22.842 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:22.842 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:22.842 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:22.842 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.842 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.101 rmmod nvme_tcp 00:09:23.101 rmmod nvme_fabrics 00:09:23.101 rmmod nvme_keyring 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 78933 ']' 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 78933 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 78933 ']' 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 78933 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78933 00:09:23.101 killing process with pid 78933 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 78933' 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 78933 00:09:23.101 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 78933 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:23.670 ************************************ 00:09:23.670 END TEST nvmf_lvol 00:09:23.670 ************************************ 00:09:23.670 00:09:23.670 real 0m15.785s 00:09:23.670 user 1m5.545s 00:09:23.670 sys 0m4.160s 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.670 ************************************ 00:09:23.670 START TEST nvmf_lvs_grow 00:09:23.670 ************************************ 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:23.670 * Looking for test storage... 
00:09:23.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.670 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
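Before the lvs_grow suite rebuilds the same namespace and bridge below, the data path that the nvmf_lvol run above exercised can be summarized as a single rpc.py sequence. This is a hedged recap of that trace, not a drop-in script: it assumes nvmf_tgt is already running on the default RPC socket, and it relies on each create call printing the new bdev name or UUID on stdout, which is how the test itself captures them.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # prints Malloc0
$rpc bdev_malloc_create 64 512                                   # prints Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # lvol UUID (LVOL_BDEV_INIT_SIZE=20)

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 10 s of 4 KiB random writes at queue depth 128 against the exported lvol, on cores 3 and 4.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
sleep 1

# Snapshot, grow to LVOL_BDEV_FINAL_SIZE=30, clone and inflate while the perf job runs.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait "$perf_pid"

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"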
00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:23.671 Cannot find device "nvmf_tgt_br" 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:23.671 Cannot find device "nvmf_tgt_br2" 00:09:23.671 04:05:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:23.671 Cannot find device "nvmf_tgt_br" 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:23.671 Cannot find device "nvmf_tgt_br2" 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:09:23.671 04:05:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:23.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:23.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:23.931 04:05:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:23.931 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:23.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:09:23.932 00:09:23.932 --- 10.0.0.2 ping statistics --- 00:09:23.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.932 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:23.932 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:23.932 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:23.932 00:09:23.932 --- 10.0.0.3 ping statistics --- 00:09:23.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.932 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:23.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:23.932 00:09:23.932 --- 10.0.0.1 ping statistics --- 00:09:23.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.932 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.932 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.191 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=79342 00:09:24.191 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:24.191 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 79342 00:09:24.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.191 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 79342 ']' 00:09:24.191 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.191 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.191 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.191 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.191 04:05:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.191 [2024-07-23 04:05:17.337101] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
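The nvmf_veth_init block traced above can be replayed by hand. The commands below are condensed from the trace itself (interface names, addresses, iptables rules, and the nvmf_tgt invocation are copied verbatim); treat this as a reference sketch of the topology the harness builds, not a supported setup script.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # sanity-check the data path
    # Target runs inside the namespace, as in the trace:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &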
00:09:24.191 [2024-07-23 04:05:17.337461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.191 [2024-07-23 04:05:17.463714] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:24.191 [2024-07-23 04:05:17.482708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.449 [2024-07-23 04:05:17.594380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.449 [2024-07-23 04:05:17.594765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.449 [2024-07-23 04:05:17.594796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.449 [2024-07-23 04:05:17.594806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.449 [2024-07-23 04:05:17.594830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.449 [2024-07-23 04:05:17.594875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.449 [2024-07-23 04:05:17.680919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.017 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.017 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:25.017 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.017 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.017 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.275 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.275 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:25.275 [2024-07-23 04:05:18.600959] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.534 ************************************ 00:09:25.534 START TEST lvs_grow_clean 00:09:25.534 ************************************ 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:25.534 04:05:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:25.534 04:05:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:25.793 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:25.793 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:26.052 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:26.052 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:26.052 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:26.310 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:26.310 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:26.310 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 lvol 150 00:09:26.569 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1b65c073-7d89-467e-ae0c-9f147ac9d963 00:09:26.569 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:26.569 04:05:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:26.827 [2024-07-23 04:05:20.006982] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:26.827 [2024-07-23 04:05:20.008998] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:26.827 true 00:09:26.827 04:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:09:26.828 04:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:27.114 04:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:27.114 04:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:27.375 04:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1b65c073-7d89-467e-ae0c-9f147ac9d963 00:09:27.634 04:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:27.634 [2024-07-23 04:05:20.968047] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.893 04:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.893 04:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:27.893 04:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=79430 00:09:27.893 04:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:27.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:27.893 04:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79430 /var/tmp/bdevperf.sock 00:09:27.893 04:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 79430 ']' 00:09:27.893 04:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:27.893 04:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.893 04:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:27.893 04:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.893 04:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:28.151 [2024-07-23 04:05:21.267028] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
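Stripped of the harness wrappers, the lvs_grow_clean flow traced above is a short RPC sequence, sketched below with a placeholder backing-file path (the harness uses test/nvmf/target/aio_bdev). The cluster math is the point of the test: a 200 MiB file carved into 4 MiB clusters yields 49 usable data clusters, a 150 MiB lvol consumes 38 of them, and truncating the file to 400 MiB changes nothing until an explicit bdev_lvol_grow_lvstore raises the total to 99.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/tmp/aio_bdev_file                                   # placeholder path for this sketch
    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096                # AIO bdev with 4 KiB blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # 4 MiB clusters -> 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)              # 150 MiB lvol (38 clusters)
    truncate -s 400M "$aio_file"                                  # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                                 # block count 51200 -> 102400
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                         # after this, total_data_clusters == 99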
00:09:28.151 [2024-07-23 04:05:21.267512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79430 ] 00:09:28.151 [2024-07-23 04:05:21.386683] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:28.151 [2024-07-23 04:05:21.405518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.151 [2024-07-23 04:05:21.488596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.411 [2024-07-23 04:05:21.547854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.977 04:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.977 04:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:09:28.977 04:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:29.236 Nvme0n1 00:09:29.236 04:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:29.495 [ 00:09:29.495 { 00:09:29.495 "name": "Nvme0n1", 00:09:29.495 "aliases": [ 00:09:29.495 "1b65c073-7d89-467e-ae0c-9f147ac9d963" 00:09:29.495 ], 00:09:29.495 "product_name": "NVMe disk", 00:09:29.495 "block_size": 4096, 00:09:29.495 "num_blocks": 38912, 00:09:29.495 "uuid": "1b65c073-7d89-467e-ae0c-9f147ac9d963", 00:09:29.495 "assigned_rate_limits": { 00:09:29.495 "rw_ios_per_sec": 0, 00:09:29.495 "rw_mbytes_per_sec": 0, 00:09:29.495 "r_mbytes_per_sec": 0, 00:09:29.495 "w_mbytes_per_sec": 0 00:09:29.495 }, 00:09:29.495 "claimed": false, 00:09:29.495 "zoned": false, 00:09:29.495 "supported_io_types": { 00:09:29.495 "read": true, 00:09:29.495 "write": true, 00:09:29.495 "unmap": true, 00:09:29.495 "flush": true, 00:09:29.495 "reset": true, 00:09:29.495 "nvme_admin": true, 00:09:29.495 "nvme_io": true, 00:09:29.495 "nvme_io_md": false, 00:09:29.495 "write_zeroes": true, 00:09:29.495 "zcopy": false, 00:09:29.495 "get_zone_info": false, 00:09:29.495 "zone_management": false, 00:09:29.495 "zone_append": false, 00:09:29.495 "compare": true, 00:09:29.495 "compare_and_write": true, 00:09:29.495 "abort": true, 00:09:29.495 "seek_hole": false, 00:09:29.495 "seek_data": false, 00:09:29.495 "copy": true, 00:09:29.495 "nvme_iov_md": false 00:09:29.495 }, 00:09:29.495 "memory_domains": [ 00:09:29.495 { 00:09:29.495 "dma_device_id": "system", 00:09:29.495 "dma_device_type": 1 00:09:29.495 } 00:09:29.495 ], 00:09:29.495 "driver_specific": { 00:09:29.495 "nvme": [ 00:09:29.495 { 00:09:29.495 "trid": { 00:09:29.495 "trtype": "TCP", 00:09:29.495 "adrfam": "IPv4", 00:09:29.495 "traddr": "10.0.0.2", 00:09:29.495 "trsvcid": "4420", 00:09:29.495 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:29.495 }, 00:09:29.495 "ctrlr_data": { 00:09:29.495 "cntlid": 1, 00:09:29.495 "vendor_id": "0x8086", 00:09:29.495 "model_number": "SPDK bdev Controller", 00:09:29.495 "serial_number": "SPDK0", 00:09:29.495 "firmware_revision": "24.09", 
00:09:29.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.495 "oacs": { 00:09:29.495 "security": 0, 00:09:29.495 "format": 0, 00:09:29.495 "firmware": 0, 00:09:29.495 "ns_manage": 0 00:09:29.495 }, 00:09:29.495 "multi_ctrlr": true, 00:09:29.495 "ana_reporting": false 00:09:29.495 }, 00:09:29.495 "vs": { 00:09:29.495 "nvme_version": "1.3" 00:09:29.495 }, 00:09:29.495 "ns_data": { 00:09:29.495 "id": 1, 00:09:29.495 "can_share": true 00:09:29.495 } 00:09:29.495 } 00:09:29.495 ], 00:09:29.495 "mp_policy": "active_passive" 00:09:29.495 } 00:09:29.495 } 00:09:29.495 ] 00:09:29.495 04:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79448 00:09:29.495 04:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.495 04:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:29.495 Running I/O for 10 seconds... 00:09:30.870 Latency(us) 00:09:30.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.870 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:09:30.870 =================================================================================================================== 00:09:30.870 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:09:30.870 00:09:31.437 04:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:31.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.695 Nvme0n1 : 2.00 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:09:31.695 =================================================================================================================== 00:09:31.695 Total : 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:09:31.695 00:09:31.695 true 00:09:31.695 04:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:31.695 04:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:32.262 04:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:32.262 04:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:32.262 04:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 79448 00:09:32.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.521 Nvme0n1 : 3.00 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:09:32.521 =================================================================================================================== 00:09:32.521 Total : 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:09:32.521 00:09:33.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.898 Nvme0n1 : 4.00 6953.25 27.16 0.00 0.00 0.00 0.00 0.00 00:09:33.898 =================================================================================================================== 
00:09:33.898 Total : 6953.25 27.16 0.00 0.00 0.00 0.00 0.00 00:09:33.898 00:09:34.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.835 Nvme0n1 : 5.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:34.835 =================================================================================================================== 00:09:34.835 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:34.835 00:09:35.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.770 Nvme0n1 : 6.00 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:35.770 =================================================================================================================== 00:09:35.770 Total : 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:35.770 00:09:36.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.704 Nvme0n1 : 7.00 6803.57 26.58 0.00 0.00 0.00 0.00 0.00 00:09:36.704 =================================================================================================================== 00:09:36.704 Total : 6803.57 26.58 0.00 0.00 0.00 0.00 0.00 00:09:36.704 00:09:37.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.638 Nvme0n1 : 8.00 6778.62 26.48 0.00 0.00 0.00 0.00 0.00 00:09:37.638 =================================================================================================================== 00:09:37.638 Total : 6778.62 26.48 0.00 0.00 0.00 0.00 0.00 00:09:37.638 00:09:38.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.596 Nvme0n1 : 9.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:09:38.596 =================================================================================================================== 00:09:38.596 Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:09:38.596 00:09:39.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.531 Nvme0n1 : 10.00 6769.10 26.44 0.00 0.00 0.00 0.00 0.00 00:09:39.531 =================================================================================================================== 00:09:39.531 Total : 6769.10 26.44 0.00 0.00 0.00 0.00 0.00 00:09:39.531 00:09:39.531 00:09:39.531 Latency(us) 00:09:39.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.531 Nvme0n1 : 10.02 6770.56 26.45 0.00 0.00 18900.59 14358.34 41228.10 00:09:39.531 =================================================================================================================== 00:09:39.531 Total : 6770.56 26.45 0.00 0.00 18900.59 14358.34 41228.10 00:09:39.531 0 00:09:39.531 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79430 00:09:39.531 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 79430 ']' 00:09:39.531 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 79430 00:09:39.789 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:39.789 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:39.789 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79430 00:09:39.789 killing 
process with pid 79430 00:09:39.789 Received shutdown signal, test time was about 10.000000 seconds 00:09:39.789 00:09:39.789 Latency(us) 00:09:39.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.789 =================================================================================================================== 00:09:39.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:39.789 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:39.789 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:39.789 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79430' 00:09:39.789 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 79430 00:09:39.789 04:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 79430 00:09:39.789 04:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.355 04:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:40.355 04:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:40.355 04:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:40.613 04:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:40.613 04:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:40.613 04:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:40.871 [2024-07-23 04:05:34.103679] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.871 04:05:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:40.871 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:41.130 request: 00:09:41.130 { 00:09:41.130 "uuid": "1ec47194-baa3-4d46-9cc2-19acd3b048f5", 00:09:41.130 "method": "bdev_lvol_get_lvstores", 00:09:41.130 "req_id": 1 00:09:41.130 } 00:09:41.130 Got JSON-RPC error response 00:09:41.130 response: 00:09:41.130 { 00:09:41.130 "code": -19, 00:09:41.130 "message": "No such device" 00:09:41.130 } 00:09:41.130 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:41.130 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.130 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.130 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.130 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.388 aio_bdev 00:09:41.388 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1b65c073-7d89-467e-ae0c-9f147ac9d963 00:09:41.388 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=1b65c073-7d89-467e-ae0c-9f147ac9d963 00:09:41.388 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:41.388 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:41.388 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:41.388 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:41.388 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.647 04:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1b65c073-7d89-467e-ae0c-9f147ac9d963 -t 2000 00:09:41.905 [ 00:09:41.905 { 00:09:41.905 "name": "1b65c073-7d89-467e-ae0c-9f147ac9d963", 00:09:41.905 "aliases": [ 00:09:41.905 "lvs/lvol" 00:09:41.905 ], 00:09:41.905 "product_name": "Logical 
Volume", 00:09:41.905 "block_size": 4096, 00:09:41.905 "num_blocks": 38912, 00:09:41.905 "uuid": "1b65c073-7d89-467e-ae0c-9f147ac9d963", 00:09:41.905 "assigned_rate_limits": { 00:09:41.905 "rw_ios_per_sec": 0, 00:09:41.906 "rw_mbytes_per_sec": 0, 00:09:41.906 "r_mbytes_per_sec": 0, 00:09:41.906 "w_mbytes_per_sec": 0 00:09:41.906 }, 00:09:41.906 "claimed": false, 00:09:41.906 "zoned": false, 00:09:41.906 "supported_io_types": { 00:09:41.906 "read": true, 00:09:41.906 "write": true, 00:09:41.906 "unmap": true, 00:09:41.906 "flush": false, 00:09:41.906 "reset": true, 00:09:41.906 "nvme_admin": false, 00:09:41.906 "nvme_io": false, 00:09:41.906 "nvme_io_md": false, 00:09:41.906 "write_zeroes": true, 00:09:41.906 "zcopy": false, 00:09:41.906 "get_zone_info": false, 00:09:41.906 "zone_management": false, 00:09:41.906 "zone_append": false, 00:09:41.906 "compare": false, 00:09:41.906 "compare_and_write": false, 00:09:41.906 "abort": false, 00:09:41.906 "seek_hole": true, 00:09:41.906 "seek_data": true, 00:09:41.906 "copy": false, 00:09:41.906 "nvme_iov_md": false 00:09:41.906 }, 00:09:41.906 "driver_specific": { 00:09:41.906 "lvol": { 00:09:41.906 "lvol_store_uuid": "1ec47194-baa3-4d46-9cc2-19acd3b048f5", 00:09:41.906 "base_bdev": "aio_bdev", 00:09:41.906 "thin_provision": false, 00:09:41.906 "num_allocated_clusters": 38, 00:09:41.906 "snapshot": false, 00:09:41.906 "clone": false, 00:09:41.906 "esnap_clone": false 00:09:41.906 } 00:09:41.906 } 00:09:41.906 } 00:09:41.906 ] 00:09:41.906 04:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:41.906 04:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:41.906 04:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:42.164 04:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:42.164 04:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:42.164 04:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:42.422 04:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:42.422 04:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1b65c073-7d89-467e-ae0c-9f147ac9d963 00:09:42.681 04:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ec47194-baa3-4d46-9cc2-19acd3b048f5 00:09:42.940 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.198 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.458 ************************************ 00:09:43.458 END TEST lvs_grow_clean 00:09:43.458 ************************************ 00:09:43.458 
00:09:43.458 real 0m18.007s 00:09:43.458 user 0m16.842s 00:09:43.458 sys 0m2.530s 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.458 ************************************ 00:09:43.458 START TEST lvs_grow_dirty 00:09:43.458 ************************************ 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.458 04:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.717 04:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:43.717 04:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:43.977 04:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=115ea942-9c84-4c47-b4b5-dc4fef71d534 00:09:43.977 04:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:09:43.977 04:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:44.235 04:05:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:44.235 04:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:44.235 04:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 lvol 150 00:09:44.494 04:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e15a995a-5420-4237-b853-c98d5981f377 00:09:44.494 04:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:44.752 04:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:44.752 [2024-07-23 04:05:38.059936] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:44.752 [2024-07-23 04:05:38.060022] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:44.752 true 00:09:44.752 04:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:09:44.752 04:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:45.010 04:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:45.010 04:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:45.268 04:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e15a995a-5420-4237-b853-c98d5981f377 00:09:45.526 04:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:45.783 [2024-07-23 04:05:39.048796] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.783 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
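At this point in the dirty run the freshly created lvol is exported over NVMe/TCP exactly as in the clean run earlier. Stripped of the wrappers, the export amounts to the RPCs below (subsystem NQN, serial number, address, and port copied from the trace; $lvol stands for the lvol UUID from the create step).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # done once per target, earlier in the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420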
00:09:46.041 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=79693 00:09:46.041 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:46.041 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.041 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79693 /var/tmp/bdevperf.sock 00:09:46.041 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 79693 ']' 00:09:46.041 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.041 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.041 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.041 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.041 04:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.041 [2024-07-23 04:05:39.345720] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:46.042 [2024-07-23 04:05:39.346467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79693 ] 00:09:46.299 [2024-07-23 04:05:39.472792] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
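The bdevperf process whose startup is traced above is driven entirely over its private RPC socket; the clean run used the same sequence, reproduced here as a sketch with the socket path, workload flags, and NQN copied from the trace.

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # Attach the exported namespace as bdev Nvme0n1, confirm it appears, then run the queued workload
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests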
00:09:46.299 [2024-07-23 04:05:39.484336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.299 [2024-07-23 04:05:39.563330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.299 [2024-07-23 04:05:39.622499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:47.232 04:05:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.232 04:05:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:47.232 04:05:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:47.490 Nvme0n1 00:09:47.490 04:05:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:47.749 [ 00:09:47.749 { 00:09:47.749 "name": "Nvme0n1", 00:09:47.749 "aliases": [ 00:09:47.749 "e15a995a-5420-4237-b853-c98d5981f377" 00:09:47.749 ], 00:09:47.749 "product_name": "NVMe disk", 00:09:47.749 "block_size": 4096, 00:09:47.749 "num_blocks": 38912, 00:09:47.749 "uuid": "e15a995a-5420-4237-b853-c98d5981f377", 00:09:47.749 "assigned_rate_limits": { 00:09:47.749 "rw_ios_per_sec": 0, 00:09:47.749 "rw_mbytes_per_sec": 0, 00:09:47.749 "r_mbytes_per_sec": 0, 00:09:47.749 "w_mbytes_per_sec": 0 00:09:47.749 }, 00:09:47.749 "claimed": false, 00:09:47.749 "zoned": false, 00:09:47.749 "supported_io_types": { 00:09:47.749 "read": true, 00:09:47.749 "write": true, 00:09:47.749 "unmap": true, 00:09:47.749 "flush": true, 00:09:47.749 "reset": true, 00:09:47.749 "nvme_admin": true, 00:09:47.749 "nvme_io": true, 00:09:47.749 "nvme_io_md": false, 00:09:47.749 "write_zeroes": true, 00:09:47.749 "zcopy": false, 00:09:47.749 "get_zone_info": false, 00:09:47.749 "zone_management": false, 00:09:47.749 "zone_append": false, 00:09:47.749 "compare": true, 00:09:47.749 "compare_and_write": true, 00:09:47.749 "abort": true, 00:09:47.749 "seek_hole": false, 00:09:47.749 "seek_data": false, 00:09:47.749 "copy": true, 00:09:47.749 "nvme_iov_md": false 00:09:47.749 }, 00:09:47.749 "memory_domains": [ 00:09:47.749 { 00:09:47.749 "dma_device_id": "system", 00:09:47.749 "dma_device_type": 1 00:09:47.749 } 00:09:47.749 ], 00:09:47.749 "driver_specific": { 00:09:47.749 "nvme": [ 00:09:47.749 { 00:09:47.749 "trid": { 00:09:47.749 "trtype": "TCP", 00:09:47.749 "adrfam": "IPv4", 00:09:47.749 "traddr": "10.0.0.2", 00:09:47.749 "trsvcid": "4420", 00:09:47.749 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:47.749 }, 00:09:47.749 "ctrlr_data": { 00:09:47.749 "cntlid": 1, 00:09:47.749 "vendor_id": "0x8086", 00:09:47.749 "model_number": "SPDK bdev Controller", 00:09:47.749 "serial_number": "SPDK0", 00:09:47.749 "firmware_revision": "24.09", 00:09:47.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:47.749 "oacs": { 00:09:47.749 "security": 0, 00:09:47.749 "format": 0, 00:09:47.749 "firmware": 0, 00:09:47.749 "ns_manage": 0 00:09:47.749 }, 00:09:47.749 "multi_ctrlr": true, 00:09:47.749 "ana_reporting": false 00:09:47.749 }, 00:09:47.749 "vs": { 00:09:47.749 "nvme_version": "1.3" 00:09:47.749 }, 00:09:47.749 "ns_data": { 00:09:47.749 "id": 1, 00:09:47.749 "can_share": true 00:09:47.749 } 00:09:47.749 } 00:09:47.749 ], 00:09:47.749 
"mp_policy": "active_passive" 00:09:47.749 } 00:09:47.749 } 00:09:47.749 ] 00:09:47.749 04:05:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79718 00:09:47.749 04:05:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:47.749 04:05:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:47.749 Running I/O for 10 seconds... 00:09:49.125 Latency(us) 00:09:49.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.125 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:49.125 =================================================================================================================== 00:09:49.125 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:49.125 00:09:49.693 04:05:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:09:49.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.951 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:49.951 =================================================================================================================== 00:09:49.951 Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:49.951 00:09:49.951 true 00:09:49.951 04:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:09:49.951 04:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:50.515 04:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:50.516 04:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:50.516 04:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 79718 00:09:50.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.773 Nvme0n1 : 3.00 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:09:50.773 =================================================================================================================== 00:09:50.773 Total : 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:09:50.773 00:09:52.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.147 Nvme0n1 : 4.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:52.147 =================================================================================================================== 00:09:52.147 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:52.147 00:09:53.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.082 Nvme0n1 : 5.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:53.082 =================================================================================================================== 00:09:53.082 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:53.082 00:09:54.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:54.018 Nvme0n1 : 6.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:54.018 =================================================================================================================== 00:09:54.018 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:54.018 00:09:54.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.954 Nvme0n1 : 7.00 6743.14 26.34 0.00 0.00 0.00 0.00 0.00 00:09:54.954 =================================================================================================================== 00:09:54.954 Total : 6743.14 26.34 0.00 0.00 0.00 0.00 0.00 00:09:54.954 00:09:55.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.890 Nvme0n1 : 8.00 6757.50 26.40 0.00 0.00 0.00 0.00 0.00 00:09:55.890 =================================================================================================================== 00:09:55.890 Total : 6757.50 26.40 0.00 0.00 0.00 0.00 0.00 00:09:55.890 00:09:56.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.825 Nvme0n1 : 9.00 6754.56 26.38 0.00 0.00 0.00 0.00 0.00 00:09:56.825 =================================================================================================================== 00:09:56.825 Total : 6754.56 26.38 0.00 0.00 0.00 0.00 0.00 00:09:56.825 00:09:57.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.760 Nvme0n1 : 10.00 6752.20 26.38 0.00 0.00 0.00 0.00 0.00 00:09:57.760 =================================================================================================================== 00:09:57.760 Total : 6752.20 26.38 0.00 0.00 0.00 0.00 0.00 00:09:57.760 00:09:57.760 00:09:57.760 Latency(us) 00:09:57.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.760 Nvme0n1 : 10.01 6759.31 26.40 0.00 0.00 18931.82 11141.12 140127.88 00:09:57.760 =================================================================================================================== 00:09:57.760 Total : 6759.31 26.40 0.00 0.00 18931.82 11141.12 140127.88 00:09:57.760 0 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79693 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 79693 ']' 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 79693 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79693 00:09:58.018 killing process with pid 79693 00:09:58.018 Received shutdown signal, test time was about 10.000000 seconds 00:09:58.018 00:09:58.018 Latency(us) 00:09:58.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.018 =================================================================================================================== 00:09:58.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79693' 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 79693 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 79693 00:09:58.018 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:58.276 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:58.534 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:09:58.534 04:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:58.792 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:58.792 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:58.792 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 79342 00:09:58.792 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 79342 00:09:59.051 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 79342 Killed "${NVMF_APP[@]}" "$@" 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=79851 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 79851 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 79851 ']' 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.051 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.051 04:05:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:59.051 [2024-07-23 04:05:52.208506] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:09:59.051 [2024-07-23 04:05:52.208613] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.051 [2024-07-23 04:05:52.334682] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:59.051 [2024-07-23 04:05:52.351262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.309 [2024-07-23 04:05:52.404300] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.309 [2024-07-23 04:05:52.404359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.309 [2024-07-23 04:05:52.404385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.309 [2024-07-23 04:05:52.404392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.309 [2024-07-23 04:05:52.404398] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
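For reference, the lvs_grow_dirty flow traced above condenses to roughly the sketch below. The rpc.py path, the lvstore UUID and the expected cluster counts are the ones visible in this run; the $nvmfpid variable is illustrative (the run above kills pid 79342 directly). The point of the test is that a bdev_lvol_grow_lvstore issued while bdevperf randwrite I/O is in flight survives a kill -9 of the target: after restart, re-attaching the AIO bdev triggers blobstore recovery and the grown cluster counts are still reported.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvs=115ea942-9c84-4c47-b4b5-dc4fef71d534                    # lvstore UUID from this run
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                       # grow while the bdevperf job is running
  (( $($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 99 ))
  kill -9 "$nvmfpid"                                          # dirty shutdown of the nvmf target
  # restart nvmf_tgt, then re-create the AIO bdev; the blobstore replays the dirty metadata
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  (( $($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters') == 61 ))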
00:09:59.309 [2024-07-23 04:05:52.404431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.309 [2024-07-23 04:05:52.453587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:59.875 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.875 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:59.875 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.875 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.875 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:59.875 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.875 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.134 [2024-07-23 04:05:53.355447] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:00.134 [2024-07-23 04:05:53.355755] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:00.134 [2024-07-23 04:05:53.356015] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:00.134 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:00.134 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e15a995a-5420-4237-b853-c98d5981f377 00:10:00.134 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=e15a995a-5420-4237-b853-c98d5981f377 00:10:00.134 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:00.134 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:00.134 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:00.134 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:00.134 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:00.392 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e15a995a-5420-4237-b853-c98d5981f377 -t 2000 00:10:00.650 [ 00:10:00.650 { 00:10:00.650 "name": "e15a995a-5420-4237-b853-c98d5981f377", 00:10:00.650 "aliases": [ 00:10:00.650 "lvs/lvol" 00:10:00.650 ], 00:10:00.650 "product_name": "Logical Volume", 00:10:00.650 "block_size": 4096, 00:10:00.650 "num_blocks": 38912, 00:10:00.650 "uuid": "e15a995a-5420-4237-b853-c98d5981f377", 00:10:00.650 "assigned_rate_limits": { 00:10:00.650 "rw_ios_per_sec": 0, 00:10:00.650 "rw_mbytes_per_sec": 0, 00:10:00.650 "r_mbytes_per_sec": 0, 00:10:00.650 "w_mbytes_per_sec": 0 00:10:00.650 }, 00:10:00.650 
"claimed": false, 00:10:00.650 "zoned": false, 00:10:00.650 "supported_io_types": { 00:10:00.650 "read": true, 00:10:00.650 "write": true, 00:10:00.650 "unmap": true, 00:10:00.650 "flush": false, 00:10:00.650 "reset": true, 00:10:00.650 "nvme_admin": false, 00:10:00.650 "nvme_io": false, 00:10:00.650 "nvme_io_md": false, 00:10:00.650 "write_zeroes": true, 00:10:00.650 "zcopy": false, 00:10:00.650 "get_zone_info": false, 00:10:00.650 "zone_management": false, 00:10:00.650 "zone_append": false, 00:10:00.650 "compare": false, 00:10:00.650 "compare_and_write": false, 00:10:00.650 "abort": false, 00:10:00.650 "seek_hole": true, 00:10:00.650 "seek_data": true, 00:10:00.650 "copy": false, 00:10:00.650 "nvme_iov_md": false 00:10:00.650 }, 00:10:00.650 "driver_specific": { 00:10:00.650 "lvol": { 00:10:00.650 "lvol_store_uuid": "115ea942-9c84-4c47-b4b5-dc4fef71d534", 00:10:00.650 "base_bdev": "aio_bdev", 00:10:00.650 "thin_provision": false, 00:10:00.650 "num_allocated_clusters": 38, 00:10:00.650 "snapshot": false, 00:10:00.650 "clone": false, 00:10:00.650 "esnap_clone": false 00:10:00.650 } 00:10:00.650 } 00:10:00.650 } 00:10:00.650 ] 00:10:00.650 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:00.650 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:10:00.650 04:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:00.908 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:00.908 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:10:00.908 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:01.166 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:01.166 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:01.166 [2024-07-23 04:05:54.488909] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.425 04:05:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:01.425 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:10:01.684 request: 00:10:01.684 { 00:10:01.684 "uuid": "115ea942-9c84-4c47-b4b5-dc4fef71d534", 00:10:01.684 "method": "bdev_lvol_get_lvstores", 00:10:01.684 "req_id": 1 00:10:01.684 } 00:10:01.684 Got JSON-RPC error response 00:10:01.684 response: 00:10:01.684 { 00:10:01.684 "code": -19, 00:10:01.684 "message": "No such device" 00:10:01.684 } 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:01.684 aio_bdev 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e15a995a-5420-4237-b853-c98d5981f377 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=e15a995a-5420-4237-b853-c98d5981f377 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:01.684 04:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:01.942 04:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e15a995a-5420-4237-b853-c98d5981f377 -t 2000 00:10:02.200 [ 00:10:02.200 { 
00:10:02.200 "name": "e15a995a-5420-4237-b853-c98d5981f377", 00:10:02.200 "aliases": [ 00:10:02.200 "lvs/lvol" 00:10:02.200 ], 00:10:02.200 "product_name": "Logical Volume", 00:10:02.200 "block_size": 4096, 00:10:02.200 "num_blocks": 38912, 00:10:02.200 "uuid": "e15a995a-5420-4237-b853-c98d5981f377", 00:10:02.200 "assigned_rate_limits": { 00:10:02.200 "rw_ios_per_sec": 0, 00:10:02.200 "rw_mbytes_per_sec": 0, 00:10:02.200 "r_mbytes_per_sec": 0, 00:10:02.200 "w_mbytes_per_sec": 0 00:10:02.200 }, 00:10:02.200 "claimed": false, 00:10:02.200 "zoned": false, 00:10:02.200 "supported_io_types": { 00:10:02.200 "read": true, 00:10:02.200 "write": true, 00:10:02.200 "unmap": true, 00:10:02.200 "flush": false, 00:10:02.200 "reset": true, 00:10:02.200 "nvme_admin": false, 00:10:02.200 "nvme_io": false, 00:10:02.200 "nvme_io_md": false, 00:10:02.200 "write_zeroes": true, 00:10:02.200 "zcopy": false, 00:10:02.200 "get_zone_info": false, 00:10:02.200 "zone_management": false, 00:10:02.200 "zone_append": false, 00:10:02.200 "compare": false, 00:10:02.200 "compare_and_write": false, 00:10:02.200 "abort": false, 00:10:02.200 "seek_hole": true, 00:10:02.200 "seek_data": true, 00:10:02.200 "copy": false, 00:10:02.200 "nvme_iov_md": false 00:10:02.200 }, 00:10:02.200 "driver_specific": { 00:10:02.200 "lvol": { 00:10:02.200 "lvol_store_uuid": "115ea942-9c84-4c47-b4b5-dc4fef71d534", 00:10:02.200 "base_bdev": "aio_bdev", 00:10:02.201 "thin_provision": false, 00:10:02.201 "num_allocated_clusters": 38, 00:10:02.201 "snapshot": false, 00:10:02.201 "clone": false, 00:10:02.201 "esnap_clone": false 00:10:02.201 } 00:10:02.201 } 00:10:02.201 } 00:10:02.201 ] 00:10:02.201 04:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:02.201 04:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:10:02.201 04:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:02.458 04:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:02.458 04:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:10:02.458 04:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:02.458 04:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:02.458 04:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e15a995a-5420-4237-b853-c98d5981f377 00:10:02.717 04:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 115ea942-9c84-4c47-b4b5-dc4fef71d534 00:10:02.976 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:03.234 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.492 00:10:03.492 real 0m20.043s 00:10:03.492 user 0m41.025s 00:10:03.492 sys 0m9.802s 00:10:03.492 ************************************ 00:10:03.492 END TEST lvs_grow_dirty 00:10:03.492 ************************************ 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:03.492 nvmf_trace.0 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.492 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:03.751 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.751 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:03.751 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.751 04:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.751 rmmod nvme_tcp 00:10:03.751 rmmod nvme_fabrics 00:10:03.751 rmmod nvme_keyring 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 79851 ']' 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 79851 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 79851 ']' 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 79851 00:10:03.751 04:05:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79851 00:10:03.751 killing process with pid 79851 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79851' 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 79851 00:10:03.751 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 79851 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:04.009 ************************************ 00:10:04.009 END TEST nvmf_lvs_grow 00:10:04.009 ************************************ 00:10:04.009 00:10:04.009 real 0m40.506s 00:10:04.009 user 1m3.563s 00:10:04.009 sys 0m13.058s 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.009 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.268 ************************************ 00:10:04.268 START TEST nvmf_bdev_io_wait 00:10:04.268 ************************************ 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:04.268 * Looking for test storage... 
00:10:04.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.268 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.269 04:05:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:04.269 04:05:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:04.269 Cannot find device "nvmf_tgt_br" 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:04.269 
04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.269 Cannot find device "nvmf_tgt_br2" 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:04.269 Cannot find device "nvmf_tgt_br" 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:04.269 Cannot find device "nvmf_tgt_br2" 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:04.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:04.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:04.269 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:04.528 04:05:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:04.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:10:04.528 00:10:04.528 --- 10.0.0.2 ping statistics --- 00:10:04.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.528 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:04.528 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:04.528 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:04.528 00:10:04.528 --- 10.0.0.3 ping statistics --- 00:10:04.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.528 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:04.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:04.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:04.528 00:10:04.528 --- 10.0.0.1 ping statistics --- 00:10:04.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.528 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=80168 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 80168 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 80168 ']' 00:10:04.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.528 04:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:04.787 [2024-07-23 04:05:57.900958] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
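As a reference for the namespace plumbing a few lines up, nvmf_veth_init amounts to roughly the following sketch (interface names and addresses are the ones in the trace; the second target interface/bridge pair, nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3, is wired the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two veth peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

After that, 10.0.0.1 (initiator side) and 10.0.0.2 (inside nvmf_tgt_ns_spdk) reach each other, which is what the three pings above verify before the target is started.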
00:10:04.787 [2024-07-23 04:05:57.901656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.787 [2024-07-23 04:05:58.028856] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:04.787 [2024-07-23 04:05:58.044397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.787 [2024-07-23 04:05:58.103160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.787 [2024-07-23 04:05:58.103543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.787 [2024-07-23 04:05:58.103708] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.787 [2024-07-23 04:05:58.103759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.787 [2024-07-23 04:05:58.103854] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.787 [2024-07-23 04:05:58.104058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.787 [2024-07-23 04:05:58.104405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.787 [2024-07-23 04:05:58.104410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.787 [2024-07-23 04:05:58.104875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.738 04:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.738 [2024-07-23 04:05:59.001009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.738 04:05:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.738 [2024-07-23 04:05:59.017486] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.738 Malloc0 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.738 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.006 [2024-07-23 04:05:59.081349] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=80203 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@30 -- # READ_PID=80205 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:06.006 { 00:10:06.006 "params": { 00:10:06.006 "name": "Nvme$subsystem", 00:10:06.006 "trtype": "$TEST_TRANSPORT", 00:10:06.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.006 "adrfam": "ipv4", 00:10:06.006 "trsvcid": "$NVMF_PORT", 00:10:06.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.006 "hdgst": ${hdgst:-false}, 00:10:06.006 "ddgst": ${ddgst:-false} 00:10:06.006 }, 00:10:06.006 "method": "bdev_nvme_attach_controller" 00:10:06.006 } 00:10:06.006 EOF 00:10:06.006 )") 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=80208 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:06.006 { 00:10:06.006 "params": { 00:10:06.006 "name": "Nvme$subsystem", 00:10:06.006 "trtype": "$TEST_TRANSPORT", 00:10:06.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.006 "adrfam": "ipv4", 00:10:06.006 "trsvcid": "$NVMF_PORT", 00:10:06.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.006 "hdgst": ${hdgst:-false}, 00:10:06.006 "ddgst": ${ddgst:-false} 00:10:06.006 }, 00:10:06.006 "method": "bdev_nvme_attach_controller" 00:10:06.006 } 00:10:06.006 EOF 00:10:06.006 )") 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:06.006 { 00:10:06.006 "params": 
{ 00:10:06.006 "name": "Nvme$subsystem", 00:10:06.006 "trtype": "$TEST_TRANSPORT", 00:10:06.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.006 "adrfam": "ipv4", 00:10:06.006 "trsvcid": "$NVMF_PORT", 00:10:06.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.006 "hdgst": ${hdgst:-false}, 00:10:06.006 "ddgst": ${ddgst:-false} 00:10:06.006 }, 00:10:06.006 "method": "bdev_nvme_attach_controller" 00:10:06.006 } 00:10:06.006 EOF 00:10:06.006 )") 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:06.006 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=80210 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:06.007 "params": { 00:10:06.007 "name": "Nvme1", 00:10:06.007 "trtype": "tcp", 00:10:06.007 "traddr": "10.0.0.2", 00:10:06.007 "adrfam": "ipv4", 00:10:06.007 "trsvcid": "4420", 00:10:06.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.007 "hdgst": false, 00:10:06.007 "ddgst": false 00:10:06.007 }, 00:10:06.007 "method": "bdev_nvme_attach_controller" 00:10:06.007 }' 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:06.007 { 00:10:06.007 "params": { 00:10:06.007 "name": "Nvme$subsystem", 00:10:06.007 "trtype": "$TEST_TRANSPORT", 00:10:06.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.007 "adrfam": "ipv4", 00:10:06.007 "trsvcid": "$NVMF_PORT", 00:10:06.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.007 "hdgst": ${hdgst:-false}, 00:10:06.007 "ddgst": ${ddgst:-false} 00:10:06.007 }, 00:10:06.007 "method": "bdev_nvme_attach_controller" 00:10:06.007 } 00:10:06.007 EOF 00:10:06.007 )") 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:06.007 "params": { 00:10:06.007 "name": "Nvme1", 00:10:06.007 "trtype": "tcp", 00:10:06.007 "traddr": "10.0.0.2", 00:10:06.007 "adrfam": "ipv4", 00:10:06.007 "trsvcid": "4420", 00:10:06.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.007 "hdgst": false, 00:10:06.007 "ddgst": false 00:10:06.007 }, 00:10:06.007 "method": "bdev_nvme_attach_controller" 00:10:06.007 }' 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 
00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:06.007 "params": { 00:10:06.007 "name": "Nvme1", 00:10:06.007 "trtype": "tcp", 00:10:06.007 "traddr": "10.0.0.2", 00:10:06.007 "adrfam": "ipv4", 00:10:06.007 "trsvcid": "4420", 00:10:06.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.007 "hdgst": false, 00:10:06.007 "ddgst": false 00:10:06.007 }, 00:10:06.007 "method": "bdev_nvme_attach_controller" 00:10:06.007 }' 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:06.007 "params": { 00:10:06.007 "name": "Nvme1", 00:10:06.007 "trtype": "tcp", 00:10:06.007 "traddr": "10.0.0.2", 00:10:06.007 "adrfam": "ipv4", 00:10:06.007 "trsvcid": "4420", 00:10:06.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.007 "hdgst": false, 00:10:06.007 "ddgst": false 00:10:06.007 }, 00:10:06.007 "method": "bdev_nvme_attach_controller" 00:10:06.007 }' 00:10:06.007 [2024-07-23 04:05:59.144288] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:06.007 [2024-07-23 04:05:59.145466] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:06.007 [2024-07-23 04:05:59.145834] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:06.007 [2024-07-23 04:05:59.145926] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:06.007 04:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 80203 00:10:06.007 [2024-07-23 04:05:59.165698] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:06.007 [2024-07-23 04:05:59.165878] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:06.007 [2024-07-23 04:05:59.166816] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:06.007 [2024-07-23 04:05:59.167752] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:06.007 [2024-07-23 04:05:59.338395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
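For readers reconstructing this setup by hand: the target-side configuration traced above reduces to a short sequence of SPDK RPCs against the running nvmf_tgt. A minimal sketch, assuming scripts/rpc.py talks to the default /var/tmp/spdk.sock the way rpc_cmd does in this harness:

    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192             # TCP transport, options exactly as traced above
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each of the four bdevperf instances launched above (write, read, flush, unmap) then consumes the JSON printed by gen_nvmf_target_json via --json /dev/fd/63.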
00:10:06.265 [2024-07-23 04:05:59.358156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.265 [2024-07-23 04:05:59.413118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:06.265 [2024-07-23 04:05:59.433839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:06.265 [2024-07-23 04:05:59.436244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.265 [2024-07-23 04:05:59.481157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:06.265 [2024-07-23 04:05:59.495702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.265 [2024-07-23 04:05:59.508324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:06.265 [2024-07-23 04:05:59.508504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.265 [2024-07-23 04:05:59.555435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.265 [2024-07-23 04:05:59.561848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:06.265 [2024-07-23 04:05:59.569265] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:06.265 [2024-07-23 04:05:59.590786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.265 [2024-07-23 04:05:59.606358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.522 Running I/O for 1 seconds... 00:10:06.522 Running I/O for 1 seconds... 00:10:06.522 [2024-07-23 04:05:59.662065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:06.522 Running I/O for 1 seconds... 00:10:06.522 [2024-07-23 04:05:59.710264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.522 Running I/O for 1 seconds... 
00:10:07.457 00:10:07.457 Latency(us) 00:10:07.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.457 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:07.457 Nvme1n1 : 1.01 9689.10 37.85 0.00 0.00 13145.23 8936.73 21567.30 00:10:07.457 =================================================================================================================== 00:10:07.457 Total : 9689.10 37.85 0.00 0.00 13145.23 8936.73 21567.30 00:10:07.457 00:10:07.457 Latency(us) 00:10:07.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.457 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:07.457 Nvme1n1 : 1.01 8763.26 34.23 0.00 0.00 14538.89 7923.90 25976.09 00:10:07.457 =================================================================================================================== 00:10:07.457 Total : 8763.26 34.23 0.00 0.00 14538.89 7923.90 25976.09 00:10:07.457 00:10:07.457 Latency(us) 00:10:07.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.457 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:07.457 Nvme1n1 : 1.00 180139.98 703.67 0.00 0.00 708.01 348.16 789.41 00:10:07.457 =================================================================================================================== 00:10:07.457 Total : 180139.98 703.67 0.00 0.00 708.01 348.16 789.41 00:10:07.715 00:10:07.715 Latency(us) 00:10:07.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.715 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:07.715 Nvme1n1 : 1.01 9538.65 37.26 0.00 0.00 13365.52 6255.71 20614.05 00:10:07.715 =================================================================================================================== 00:10:07.715 Total : 9538.65 37.26 0.00 0.00 13365.52 6255.71 20614.05 00:10:07.715 04:06:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 80205 00:10:07.715 04:06:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 80208 00:10:07.715 04:06:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 80210 00:10:07.973 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.973 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.973 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.973 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:07.974 rmmod nvme_tcp 00:10:07.974 rmmod nvme_fabrics 00:10:07.974 rmmod nvme_keyring 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 80168 ']' 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 80168 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 80168 ']' 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 80168 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80168 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80168' 00:10:07.974 killing process with pid 80168 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 80168 00:10:07.974 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 80168 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:08.232 00:10:08.232 real 0m4.053s 00:10:08.232 user 0m17.625s 00:10:08.232 sys 0m2.281s 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.232 ************************************ 00:10:08.232 END TEST nvmf_bdev_io_wait 
00:10:08.232 ************************************ 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.232 ************************************ 00:10:08.232 START TEST nvmf_queue_depth 00:10:08.232 ************************************ 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:08.232 * Looking for test storage... 00:10:08.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.232 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 
00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:08.233 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:08.492 Cannot find device "nvmf_tgt_br" 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.492 Cannot find device "nvmf_tgt_br2" 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:08.492 Cannot find device "nvmf_tgt_br" 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:08.492 Cannot find device "nvmf_tgt_br2" 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:08.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:08.492 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:08.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:10:08.751 00:10:08.751 --- 10.0.0.2 ping statistics --- 00:10:08.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.751 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:08.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:08.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:10:08.751 00:10:08.751 --- 10.0.0.3 ping statistics --- 00:10:08.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.751 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:08.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:10:08.751 00:10:08.751 --- 10.0.0.1 ping statistics --- 00:10:08.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.751 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=80436 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 80436 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 80436 ']' 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
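The nvmf_veth_init sequence traced above always builds the same small test bed: the initiator stays in the root network namespace on 10.0.0.1, the target runs inside nvmf_tgt_ns_spdk on 10.0.0.2 (with 10.0.0.3 on a second interface), and a bridge joins the veth peers. Stripped to its essentials, and only as a sketch of the commands already shown (nvmf/common.sh remains the authoritative version):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # reachability check, as above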
00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.751 04:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.751 [2024-07-23 04:06:01.950731] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:08.751 [2024-07-23 04:06:01.950814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.751 [2024-07-23 04:06:02.074573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:09.010 [2024-07-23 04:06:02.094777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.010 [2024-07-23 04:06:02.165584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.010 [2024-07-23 04:06:02.165651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.010 [2024-07-23 04:06:02.165665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.010 [2024-07-23 04:06:02.165676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.010 [2024-07-23 04:06:02.165686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.010 [2024-07-23 04:06:02.165717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.010 [2024-07-23 04:06:02.222609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.944 [2024-07-23 04:06:02.979037] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.944 04:06:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 
00:10:09.944 Malloc0 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.944 [2024-07-23 04:06:03.052488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=80473 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 80473 /var/tmp/bdevperf.sock 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 80473 ']' 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:09.944 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:09.945 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:09.945 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.945 04:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.945 [2024-07-23 04:06:03.112860] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:10:09.945 [2024-07-23 04:06:03.112973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80473 ] 00:10:09.945 [2024-07-23 04:06:03.235451] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:09.945 [2024-07-23 04:06:03.253940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.202 [2024-07-23 04:06:03.329255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.202 [2024-07-23 04:06:03.385210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:10.768 04:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.768 04:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:10.768 04:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:10.768 04:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.768 04:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:10.768 NVMe0n1 00:10:10.768 04:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.768 04:06:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:11.026 Running I/O for 10 seconds... 
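On the host side, the queue-depth measurement follows the usual bdevperf remote-control pattern visible in the trace above: start bdevperf with -z so it idles until told to run, attach the NVMe-oF TCP controller over bdevperf's own RPC socket, then kick off the workload with bdevperf.py. A condensed sketch using the same values as this run (repo paths shortened):

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests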
00:10:21.029 00:10:21.029 Latency(us) 00:10:21.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.029 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:21.029 Verification LBA range: start 0x0 length 0x4000 00:10:21.029 NVMe0n1 : 10.07 9330.25 36.45 0.00 0.00 109261.66 22043.93 78643.20 00:10:21.029 =================================================================================================================== 00:10:21.029 Total : 9330.25 36.45 0.00 0.00 109261.66 22043.93 78643.20 00:10:21.029 0 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 80473 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 80473 ']' 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 80473 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80473 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:21.029 killing process with pid 80473 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80473' 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 80473 00:10:21.029 Received shutdown signal, test time was about 10.000000 seconds 00:10:21.029 00:10:21.029 Latency(us) 00:10:21.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.029 =================================================================================================================== 00:10:21.029 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:21.029 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 80473 00:10:21.287 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:21.287 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:21.287 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:21.287 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:21.287 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:21.287 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:21.287 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:21.287 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:21.287 rmmod nvme_tcp 00:10:21.287 rmmod nvme_fabrics 00:10:21.287 rmmod nvme_keyring 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:21.545 04:06:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 80436 ']' 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 80436 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 80436 ']' 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 80436 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80436 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:21.545 killing process with pid 80436 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80436' 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 80436 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 80436 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.545 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.804 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:21.804 00:10:21.804 real 0m13.447s 00:10:21.804 user 0m23.305s 00:10:21.804 sys 0m2.155s 00:10:21.804 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.804 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:21.804 ************************************ 00:10:21.804 END TEST nvmf_queue_depth 00:10:21.804 ************************************ 00:10:21.804 04:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:21.804 04:06:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:21.804 04:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:21.804 04:06:14 nvmf_tcp.nvmf_target_core 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.804 04:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.804 ************************************ 00:10:21.804 START TEST nvmf_target_multipath 00:10:21.804 ************************************ 00:10:21.804 04:06:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:21.804 * Looking for test storage... 00:10:21.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.804 04:06:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:21.804 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.805 
04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:21.805 Cannot find device "nvmf_tgt_br" 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.805 Cannot find device "nvmf_tgt_br2" 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:21.805 Cannot find device "nvmf_tgt_br" 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:21.805 Cannot find device "nvmf_tgt_br2" 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:21.805 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer 
name nvmf_tgt_br2 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:22.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:10:22.065 00:10:22.065 --- 10.0.0.2 ping statistics --- 00:10:22.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.065 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:22.065 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:22.065 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:22.065 00:10:22.065 --- 10.0.0.3 ping statistics --- 00:10:22.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.065 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:22.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:22.065 00:10:22.065 --- 10.0.0.1 ping statistics --- 00:10:22.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.065 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:22.065 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=80790 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 80790 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 80790 ']' 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
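Taken together, the nvmf_veth_init steps traced above reduce to roughly the following standalone sequence (a sketch using the interface names and 10.0.0.x addresses from this run; the real helper in test/nvmf/common.sh wraps the same commands with cleanup and error handling):

# namespace for the target plus three veth pairs: one initiator-side, two target-side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side ends into the namespace and assign the test addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# admit NVMe/TCP traffic, forward across the bridge, and verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1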
00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.343 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:22.343 [2024-07-23 04:06:15.460449] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:22.343 [2024-07-23 04:06:15.460542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.343 [2024-07-23 04:06:15.580612] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:22.343 [2024-07-23 04:06:15.595673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.343 [2024-07-23 04:06:15.666501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.343 [2024-07-23 04:06:15.666554] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.343 [2024-07-23 04:06:15.666581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.343 [2024-07-23 04:06:15.666588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.343 [2024-07-23 04:06:15.666594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
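The nvmfappstart call above launches nvmf_tgt inside that namespace and then blocks in waitforlisten until the application answers on its UNIX-domain RPC socket. A minimal stand-in for the same pattern; the readiness loop below is an illustrative approximation, not the helper from autotest_common.sh:

# start the target in the namespace with the flags from this run
# (-i: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: cores 0-3)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# poll the default RPC socket until the target is ready to accept RPCs
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc_py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done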
00:10:22.343 [2024-07-23 04:06:15.666734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.343 [2024-07-23 04:06:15.667411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.343 [2024-07-23 04:06:15.667576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.343 [2024-07-23 04:06:15.667580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.601 [2024-07-23 04:06:15.723395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:22.601 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.601 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:10:22.601 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.601 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:22.601 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:22.601 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.601 04:06:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.859 [2024-07-23 04:06:16.081690] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.859 04:06:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:23.117 Malloc0 00:10:23.117 04:06:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:23.376 04:06:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.633 04:06:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.891 [2024-07-23 04:06:17.085570] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.891 04:06:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:24.150 [2024-07-23 04:06:17.349801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:24.150 04:06:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:24.408 04:06:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.3 -s 4420 -g -G 00:10:24.408 04:06:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.408 04:06:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:24.408 04:06:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.408 04:06:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:24.408 04:06:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.307 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.307 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.307 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.307 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.307 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
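On top of that network, the test provisions a dual-path subsystem and connects to it over both addresses. Stripped of the xtrace prefixes, the sequence traced above is roughly the following (paths, NQNs, and flags taken from the log; -r on nvmf_create_subsystem enables the ANA reporting the rest of the test exercises):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274

# TCP transport, plus a 64 MB / 512-byte-block malloc bdev to export
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0

# subsystem open to any host (-a), with ANA reporting (-r) and the serial waitforserial greps for
"$rpc_py" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME -r
"$rpc_py" nvmf_subsystem_add_ns "$nqn" Malloc0

# two listeners on the same port, one per namespace-side address
"$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
"$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420

# connect the initiator once per path; -g/-G request TCP header and data digests
nvme connect --hostnqn="$hostnqn" --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn="$hostnqn" --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    -t tcp -n "$nqn" -a 10.0.0.3 -s 4420 -g -G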
00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:26.308 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:26.566 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=80872 00:10:26.566 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:26.566 04:06:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:26.566 [global] 00:10:26.566 thread=1 00:10:26.566 invalidate=1 00:10:26.566 rw=randrw 00:10:26.566 time_based=1 00:10:26.566 runtime=6 00:10:26.566 ioengine=libaio 00:10:26.566 direct=1 00:10:26.566 bs=4096 00:10:26.566 iodepth=128 00:10:26.566 norandommap=0 00:10:26.566 numjobs=1 00:10:26.566 00:10:26.566 verify_dump=1 00:10:26.566 verify_backlog=512 00:10:26.566 verify_state_save=0 00:10:26.566 do_verify=1 00:10:26.566 verify=crc32c-intel 00:10:26.566 [job0] 00:10:26.566 filename=/dev/nvme0n1 00:10:26.566 Could not set queue depth (nvme0n1) 00:10:26.566 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.566 fio-3.35 00:10:26.566 Starting 1 thread 00:10:27.500 04:06:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:27.758 04:06:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:28.016 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:28.016 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:28.016 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
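The check_ana_state helper whose expansion is traced above and below just watches sysfs: each controller path of the multipath device exposes /sys/block/nvme0cXn1/ana_state, and the test waits (up to about 20 seconds) for it to report the expected state after every change. A reconstructed sketch of that helper and of the listener flips it verifies; only the existence and comparison checks appear verbatim in the trace, the retry loop is inferred:

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # wait until the kernel reports the expected ANA state for this controller path
    while [[ ! -e $ana_state_f || $(< "$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1
    done
}

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# while fio runs against /dev/nvme0n1, the test reshuffles the per-listener ANA states
# (note the RPC spells it non_optimized while sysfs reports non-optimized)
"$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
"$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
check_ana_state nvme0c0n1 inaccessible
check_ana_state nvme0c1n1 non-optimized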
00:10:28.016 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:28.017 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:28.017 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:28.017 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:28.017 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:28.017 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.017 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:28.017 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:28.017 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:28.017 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:28.275 04:06:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 80872 00:10:33.540 00:10:33.540 job0: (groupid=0, jobs=1): err= 0: pid=80893: Tue Jul 23 04:06:25 2024 00:10:33.540 read: IOPS=11.0k, BW=42.9MiB/s (45.0MB/s)(258MiB/6002msec) 00:10:33.540 slat (usec): min=7, max=8181, avg=54.38, stdev=212.31 00:10:33.540 clat (usec): min=1382, max=15850, avg=7941.46, stdev=1448.42 00:10:33.540 lat (usec): min=1395, max=15883, avg=7995.83, stdev=1451.90 00:10:33.540 clat percentiles (usec): 00:10:33.540 | 1.00th=[ 4113], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7177], 00:10:33.540 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:10:33.540 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9372], 95.00th=[11207], 00:10:33.540 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13698], 99.95th=[13960], 00:10:33.540 | 99.99th=[14353] 00:10:33.540 bw ( KiB/s): min= 7008, max=27968, per=51.87%, avg=22806.00, stdev=6050.02, samples=11 00:10:33.540 iops : min= 1752, max= 6992, avg=5701.45, stdev=1512.51, samples=11 00:10:33.540 write: IOPS=6347, BW=24.8MiB/s (26.0MB/s)(134MiB/5402msec); 0 zone resets 00:10:33.540 slat (usec): min=14, max=3321, avg=61.23, stdev=149.37 00:10:33.540 clat (usec): min=2557, max=13913, avg=6923.65, stdev=1260.23 00:10:33.540 lat (usec): min=2581, max=13936, avg=6984.89, stdev=1265.12 00:10:33.540 clat percentiles (usec): 00:10:33.540 | 1.00th=[ 3195], 5.00th=[ 4047], 10.00th=[ 5407], 20.00th=[ 6390], 00:10:33.540 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7242], 00:10:33.540 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8356], 00:10:33.540 | 99.00th=[10683], 99.50th=[11338], 99.90th=[12125], 99.95th=[12518], 00:10:33.540 | 99.99th=[13435] 00:10:33.540 bw ( KiB/s): min= 7400, max=27360, per=89.90%, avg=22826.91, stdev=5809.61, samples=11 00:10:33.540 iops : min= 1850, max= 6840, avg=5706.73, stdev=1452.40, samples=11 00:10:33.540 lat (msec) : 2=0.02%, 4=2.17%, 10=91.79%, 20=6.02% 00:10:33.540 cpu : usr=5.38%, sys=21.68%, ctx=5927, majf=0, minf=114 00:10:33.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:33.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:33.540 issued rwts: total=65978,34291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:33.540 00:10:33.540 Run status group 0 (all jobs): 00:10:33.540 READ: bw=42.9MiB/s (45.0MB/s), 42.9MiB/s-42.9MiB/s (45.0MB/s-45.0MB/s), io=258MiB (270MB), run=6002-6002msec 00:10:33.540 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=134MiB (140MB), run=5402-5402msec 00:10:33.540 00:10:33.540 Disk stats (read/write): 00:10:33.540 nvme0n1: ios=64920/33779, merge=0/0, ticks=492925/218457, in_queue=711382, util=98.58% 00:10:33.540 04:06:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=80973 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:33.540 04:06:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:33.540 [global] 00:10:33.540 thread=1 00:10:33.540 invalidate=1 00:10:33.540 rw=randrw 00:10:33.540 time_based=1 00:10:33.540 runtime=6 00:10:33.540 ioengine=libaio 00:10:33.540 direct=1 00:10:33.540 bs=4096 00:10:33.540 iodepth=128 00:10:33.540 norandommap=0 00:10:33.540 numjobs=1 00:10:33.540 00:10:33.540 verify_dump=1 00:10:33.540 verify_backlog=512 00:10:33.540 verify_state_save=0 00:10:33.540 do_verify=1 00:10:33.540 verify=crc32c-intel 00:10:33.540 [job0] 00:10:33.540 filename=/dev/nvme0n1 00:10:33.540 Could not set queue depth (nvme0n1) 00:10:33.540 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.540 fio-3.35 00:10:33.540 Starting 1 thread 00:10:34.475 04:06:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:34.475 04:06:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:34.734 
04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:34.734 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:34.992 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:35.249 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:35.249 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:35.249 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:35.249 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:35.249 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:35.250 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:35.250 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:35.250 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:35.250 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:35.250 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:35.250 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:35.250 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:35.250 04:06:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 80973 00:10:40.530 00:10:40.530 job0: (groupid=0, jobs=1): err= 0: pid=80994: Tue Jul 23 04:06:32 2024 00:10:40.530 read: IOPS=12.3k, BW=48.2MiB/s (50.6MB/s)(289MiB/6002msec) 00:10:40.530 slat (usec): min=4, max=5609, avg=42.08, stdev=178.76 00:10:40.530 clat (usec): min=580, max=13705, avg=7220.62, stdev=1742.16 00:10:40.530 lat (usec): min=594, max=13714, avg=7262.69, stdev=1756.43 00:10:40.530 clat percentiles (usec): 00:10:40.530 | 1.00th=[ 3130], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5669], 00:10:40.530 | 30.00th=[ 6652], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 7767], 00:10:40.530 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[10290], 00:10:40.530 | 99.00th=[11994], 99.50th=[12387], 99.90th=[12911], 99.95th=[13173], 00:10:40.530 | 99.99th=[13435] 00:10:40.530 bw ( KiB/s): min= 7616, max=41704, per=52.69%, avg=26024.73, stdev=9617.79, samples=11 00:10:40.530 iops : min= 1904, max=10426, avg=6506.18, stdev=2404.45, samples=11 00:10:40.530 write: IOPS=7203, BW=28.1MiB/s (29.5MB/s)(147MiB/5223msec); 0 zone resets 00:10:40.530 slat (usec): min=14, max=1810, avg=50.71, stdev=123.50 00:10:40.530 clat (usec): min=742, max=12996, avg=6028.52, stdev=1696.79 00:10:40.530 lat (usec): min=782, max=13024, avg=6079.23, stdev=1710.76 00:10:40.530 clat percentiles (usec): 00:10:40.530 | 1.00th=[ 2638], 5.00th=[ 3163], 10.00th=[ 3556], 20.00th=[ 4146], 00:10:40.530 | 30.00th=[ 4817], 40.00th=[ 6063], 50.00th=[ 6587], 60.00th=[ 6915], 00:10:40.530 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8029], 00:10:40.530 | 99.00th=[ 9896], 99.50th=[10683], 99.90th=[11863], 99.95th=[12125], 00:10:40.530 | 99.99th=[12780] 00:10:40.530 bw ( KiB/s): min= 8072, max=40960, per=90.28%, avg=26013.09, stdev=9359.56, samples=11 00:10:40.530 iops : min= 2018, max=10240, avg=6503.27, stdev=2339.89, samples=11 00:10:40.530 lat (usec) : 750=0.01%, 1000=0.01% 00:10:40.530 lat (msec) : 2=0.20%, 4=7.96%, 10=87.88%, 20=3.95% 00:10:40.530 cpu : usr=6.00%, sys=23.75%, ctx=6337, majf=0, minf=96 00:10:40.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:40.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.530 issued rwts: total=74107,37623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.530 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:40.530 00:10:40.530 Run status group 0 (all jobs): 00:10:40.530 READ: bw=48.2MiB/s (50.6MB/s), 48.2MiB/s-48.2MiB/s (50.6MB/s-50.6MB/s), io=289MiB (304MB), run=6002-6002msec 00:10:40.530 WRITE: bw=28.1MiB/s (29.5MB/s), 28.1MiB/s-28.1MiB/s (29.5MB/s-29.5MB/s), io=147MiB (154MB), run=5223-5223msec 00:10:40.530 00:10:40.530 Disk stats (read/write): 00:10:40.530 nvme0n1: ios=72658/37623, merge=0/0, ticks=496780/211160, in_queue=707940, util=98.65% 00:10:40.530 04:06:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:40.530 04:06:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.530 04:06:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:40.530 04:06:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:40.530 04:06:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.530 04:06:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.530 04:06:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:40.530 04:06:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:40.530 04:06:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.530 rmmod nvme_tcp 00:10:40.530 rmmod nvme_fabrics 00:10:40.530 rmmod nvme_keyring 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 80790 ']' 
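The disconnect and cleanup traced above and below amount to roughly the following (remove_spdk_ns runs with xtrace disabled, so deleting the namespace is the presumed final step; nvmftestfini wraps these commands with additional error handling):

# drop both paths to the subsystem; the log confirms 2 controllers were disconnected
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# target side: remove the subsystem, clean up fio state, then stop nvmf_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state
kill "$nvmfpid" && wait "$nvmfpid"    # $nvmfpid was recorded at start-up (80790 in this run)

# initiator side: unload the NVMe/TCP modules and retire the test network
modprobe -r nvme-tcp
modprobe -r nvme-fabrics
ip netns delete nvmf_tgt_ns_spdk      # takes the in-namespace veth ends with it
ip -4 addr flush nvmf_init_if         # leave the surviving host-side veth unaddressed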
00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 80790 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 80790 ']' 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 80790 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80790 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:40.530 killing process with pid 80790 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80790' 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 80790 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 80790 00:10:40.530 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:40.531 00:10:40.531 real 0m18.580s 00:10:40.531 user 1m9.040s 00:10:40.531 sys 0m9.960s 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.531 ************************************ 00:10:40.531 END TEST nvmf_target_multipath 00:10:40.531 ************************************ 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.531 04:06:33 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.531 ************************************ 00:10:40.531 START TEST nvmf_zcopy 00:10:40.531 ************************************ 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:40.531 * Looking for test storage... 00:10:40.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:40.531 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:40.532 Cannot find device "nvmf_tgt_br" 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.532 Cannot find device "nvmf_tgt_br2" 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:40.532 Cannot find device "nvmf_tgt_br" 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:40.532 Cannot find device "nvmf_tgt_br2" 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.532 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.821 04:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:40.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:10:40.821 00:10:40.821 --- 10.0.0.2 ping statistics --- 00:10:40.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.821 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:40.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:10:40.821 00:10:40.821 --- 10.0.0.3 ping statistics --- 00:10:40.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.821 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:40.821 00:10:40.821 --- 10.0.0.1 ping statistics --- 00:10:40.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.821 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=81240 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 81240 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 81240 ']' 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.821 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.821 [2024-07-23 04:06:34.097247] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:40.821 [2024-07-23 04:06:34.097324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.080 [2024-07-23 04:06:34.215498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:10:41.080 [2024-07-23 04:06:34.231753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.080 [2024-07-23 04:06:34.289491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.080 [2024-07-23 04:06:34.289555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.080 [2024-07-23 04:06:34.289565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.080 [2024-07-23 04:06:34.289572] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.080 [2024-07-23 04:06:34.289578] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.080 [2024-07-23 04:06:34.289605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.080 [2024-07-23 04:06:34.340151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:41.080 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.080 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:41.080 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.080 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.080 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.338 [2024-07-23 04:06:34.447580] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.338 [2024-07-23 04:06:34.463684] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.338 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.338 malloc0 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:41.339 { 00:10:41.339 "params": { 00:10:41.339 "name": "Nvme$subsystem", 00:10:41.339 "trtype": "$TEST_TRANSPORT", 00:10:41.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.339 "adrfam": "ipv4", 00:10:41.339 "trsvcid": "$NVMF_PORT", 00:10:41.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.339 "hdgst": ${hdgst:-false}, 00:10:41.339 "ddgst": ${ddgst:-false} 00:10:41.339 }, 00:10:41.339 "method": "bdev_nvme_attach_controller" 00:10:41.339 } 00:10:41.339 EOF 00:10:41.339 )") 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
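The xtrace above shows gen_nvmf_target_json assembling a bdev_nvme_attach_controller fragment from a heredoc and validating it with jq before bdevperf reads the result on a file descriptor (--json /dev/fd/62). A rough, standalone sketch of that pattern follows; the outer "subsystems"/"bdev" wrapper is an assumption based on the standard SPDK JSON config shape, since the helper's full heredoc is not echoed in the trace.

#!/usr/bin/env bash
# Sketch only -- not the nvmf/common.sh helper itself. Build one
# bdev_nvme_attach_controller entry and wrap it in the usual SPDK
# "subsystems"/"bdev" config document that bdevperf accepts via --json.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# One JSON object per subsystem, matching the fragment printed in the trace.
fragment=$(cat <<JSON
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  }
}
JSON
)

# jq validates and pretty-prints the assembled document, mirroring the
# "jq ." step in the trace. bdevperf then reads it from a file descriptor,
# e.g. --json <(...), which is why the trace shows --json /dev/fd/62.
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $fragment ]
    }
  ]
}
JSON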
00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:41.339 04:06:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:41.339 "params": { 00:10:41.339 "name": "Nvme1", 00:10:41.339 "trtype": "tcp", 00:10:41.339 "traddr": "10.0.0.2", 00:10:41.339 "adrfam": "ipv4", 00:10:41.339 "trsvcid": "4420", 00:10:41.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.339 "hdgst": false, 00:10:41.339 "ddgst": false 00:10:41.339 }, 00:10:41.339 "method": "bdev_nvme_attach_controller" 00:10:41.339 }' 00:10:41.339 [2024-07-23 04:06:34.558356] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:10:41.339 [2024-07-23 04:06:34.558445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81264 ] 00:10:41.597 [2024-07-23 04:06:34.681761] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:41.597 [2024-07-23 04:06:34.699070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.597 [2024-07-23 04:06:34.755144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.597 [2024-07-23 04:06:34.814711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:41.597 Running I/O for 10 seconds... 00:10:53.807 00:10:53.807 Latency(us) 00:10:53.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.807 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:53.807 Verification LBA range: start 0x0 length 0x1000 00:10:53.807 Nvme1n1 : 10.01 7044.21 55.03 0.00 0.00 18116.41 2338.44 28359.21 00:10:53.807 =================================================================================================================== 00:10:53.807 Total : 7044.21 55.03 0.00 0.00 18116.41 2338.44 28359.21 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=81382 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:53.807 { 00:10:53.807 "params": { 00:10:53.807 "name": "Nvme$subsystem", 00:10:53.807 "trtype": "$TEST_TRANSPORT", 00:10:53.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.807 "adrfam": "ipv4", 00:10:53.807 "trsvcid": "$NVMF_PORT", 00:10:53.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.807 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:10:53.807 "hdgst": ${hdgst:-false}, 00:10:53.807 "ddgst": ${ddgst:-false} 00:10:53.807 }, 00:10:53.807 "method": "bdev_nvme_attach_controller" 00:10:53.807 } 00:10:53.807 EOF 00:10:53.807 )") 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:53.807 [2024-07-23 04:06:45.141604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.141675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:53.807 04:06:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:53.807 "params": { 00:10:53.807 "name": "Nvme1", 00:10:53.807 "trtype": "tcp", 00:10:53.807 "traddr": "10.0.0.2", 00:10:53.807 "adrfam": "ipv4", 00:10:53.807 "trsvcid": "4420", 00:10:53.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.807 "hdgst": false, 00:10:53.807 "ddgst": false 00:10:53.807 }, 00:10:53.807 "method": "bdev_nvme_attach_controller" 00:10:53.807 }' 00:10:53.807 [2024-07-23 04:06:45.157558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.157587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.165556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.165583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.173558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.173586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.179491] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:10:53.807 [2024-07-23 04:06:45.179567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81382 ] 00:10:53.807 [2024-07-23 04:06:45.181579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.181814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.193565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.193749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.205565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.205704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.217568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.217705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.229571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.229600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.241571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.241598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.253571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.253597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.265574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.265599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.277577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.277603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.289603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.289635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.297738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
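The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follows is the target's RPC layer rejecting repeated nvmf_subsystem_add_ns calls for NSID 1 while the randrw workload keeps the subsystem busy. A hypothetical way to provoke the same rejection, not the exact loop zcopy.sh runs:

# Illustrative only. malloc0 is already attached to cnode1 as NSID 1, so each
# repeat attempt fails in nvmf_rpc_ns_paused with "Requested NSID 1 already in use".
for _ in $(seq 1 20); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done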
00:10:53.807 [2024-07-23 04:06:45.301594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.301807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.309594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.309801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.316123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.807 [2024-07-23 04:06:45.317615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.807 [2024-07-23 04:06:45.317812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.807 [2024-07-23 04:06:45.325604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.325815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.333603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.333743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.341603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.341780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.349603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.349773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.357603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.357772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.365617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.365804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.377611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.377791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.385613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.385783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.390439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.808 [2024-07-23 04:06:45.397615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.397789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.409622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.409800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.417617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.417788] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.425623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.425798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.433619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.433794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.441618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.441790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.449622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.449790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.453006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:53.808 [2024-07-23 04:06:45.457632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.457820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.465631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.465803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.473635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.473664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.481632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.481659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.489738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.489770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.497739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.497772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.505740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.505772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.513785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.513817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.521751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.521802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.529788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.529821] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.537780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.537812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.545778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.545807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.553806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.553842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 Running I/O for 5 seconds... 00:10:53.808 [2024-07-23 04:06:45.561804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.561828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.569830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.569864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.582771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.582806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.594167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.594202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.602518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.602554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.613928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.613971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.628989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.629024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.646209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.646242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.663137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.663173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.673880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.673952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.689927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.689970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.706401] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.706436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.717392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.717425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.732847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.732882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.750367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.750400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.759166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.759201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.774603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.774638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.786316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.786347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.802800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.802834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.812830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.812864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.822875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.822991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.836216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.836265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.844701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.844734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.858785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.858818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.808 [2024-07-23 04:06:45.867488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.808 [2024-07-23 04:06:45.867521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.880953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.880987] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.889200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.889233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.903552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.903584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.912179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.912212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.922191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.922226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.931815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.931849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.946388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.946422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.955181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.955216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.967293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.967327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.981467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.981500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:45.989798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:45.989832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.005106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.005141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.014359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.014392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.030346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.030381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.048164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.048199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.063364] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.063398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.072117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.072150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.084293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.084327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.093939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.094007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.105352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.105387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.121775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.121810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.138094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.138129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.149428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.149462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.165960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.166029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.181314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.181347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.190212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.190246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.206951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.206985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.223777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.223810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.235231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.235280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.809 [2024-07-23 04:06:46.243236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.809 [2024-07-23 04:06:46.243271] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:53.809 [2024-07-23 04:06:46.254516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:53.809 [2024-07-23 04:06:46.254549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:53.809 - 00:10:56.659 [2024-07-23 04:06:46.263767 through 04:06:49.978097] the same pair of errors (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats for each remaining add-namespace attempt in this loop
00:10:56.659 [2024-07-23 04:06:49.994332]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.659 [2024-07-23 04:06:49.994359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.011151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.011181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.027225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.027254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.044035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.044062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.060308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.060335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.076527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.076554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.093628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.093656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.109750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.109777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.126160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.126187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.143582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.143609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.160260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.160290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.175595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.175625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.191762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.191790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.207957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.207997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.224169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.224197] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.241941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.241968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.917 [2024-07-23 04:06:50.257636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.917 [2024-07-23 04:06:50.257664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.275400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.275428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.291607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.291635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.309617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.309646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.325055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.325083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.342563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.342590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.356670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.356701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.371153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.371188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.386781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.386838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.404474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.404512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.420795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.420827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.438387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.438418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.453917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.453962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.463128] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.463160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.478489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.478517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.494114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.494141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.177 [2024-07-23 04:06:50.510947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.177 [2024-07-23 04:06:50.510974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.436 [2024-07-23 04:06:50.526845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.436 [2024-07-23 04:06:50.526873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.436 [2024-07-23 04:06:50.537939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.436 [2024-07-23 04:06:50.537981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.436 [2024-07-23 04:06:50.554234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.436 [2024-07-23 04:06:50.554262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.436 [2024-07-23 04:06:50.570485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.436 [2024-07-23 04:06:50.570514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.436 00:10:57.436 Latency(us) 00:10:57.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.437 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:57.437 Nvme1n1 : 5.01 13053.03 101.98 0.00 0.00 9793.07 4051.32 24427.05 00:10:57.437 =================================================================================================================== 00:10:57.437 Total : 13053.03 101.98 0.00 0.00 9793.07 4051.32 24427.05 00:10:57.437 [2024-07-23 04:06:50.582318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.582345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.594324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.594361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.606311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.606337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.618311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.618336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.630312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.630336] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.642319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.642342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.654326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.654357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.666325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.666348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.678327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.678351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.690339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.690364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.710347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.710383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.722364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.722399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.734347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.734372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.746352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.746376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.758368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.758391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 [2024-07-23 04:06:50.770357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.437 [2024-07-23 04:06:50.770383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.437 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (81382) - No such process 00:10:57.437 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 81382 00:10:57.437 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.437 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.437 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:57.696 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.696 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 
-r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:57.696 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.696 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:57.696 delay0 00:10:57.696 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.696 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:57.696 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.696 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:57.696 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.696 04:06:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:57.696 [2024-07-23 04:06:50.955496] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:05.812 Initializing NVMe Controllers 00:11:05.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:05.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:05.812 Initialization complete. Launching workers. 00:11:05.812 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 267, failed: 19750 00:11:05.812 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19933, failed to submit 84 00:11:05.812 success 19809, unsuccess 124, failed 0 00:11:05.812 04:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:05.812 04:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:05.812 04:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:05.812 04:06:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:05.812 rmmod nvme_tcp 00:11:05.812 rmmod nvme_fabrics 00:11:05.812 rmmod nvme_keyring 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 81240 ']' 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 81240 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 81240 ']' 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 81240 00:11:05.812 04:06:58 
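For reference, the zcopy wind-down traced above is just a few SPDK JSON-RPC calls followed by the bundled abort example. A minimal standalone sketch, assuming scripts/rpc.py as the RPC client (which is what the test's rpc_cmd helper normally wraps) and an already-running target that has the malloc0 bdev and the nqn.2016-06.io.spdk:cnode1 subsystem:

# Wrap malloc0 in a delay bdev. The -r/-t/-w/-n values are taken from the trace and, if the
# usual bdev_delay_create semantics apply, set average/p99 read and write latencies in
# microseconds -- presumably so in-flight I/O lingers long enough for aborts to catch it.
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Publish the slow bdev as namespace 1 of the existing subsystem.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive it with the abort example: core mask 0x1, 5 seconds, queue depth 64, 50/50 randrw,
# connected over NVMe/TCP to 10.0.0.2:4420, namespace 1 (flags as in the trace above).
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'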
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81240 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:05.812 killing process with pid 81240 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81240' 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 81240 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 81240 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:05.812 00:11:05.812 real 0m24.868s 00:11:05.812 user 0m39.792s 00:11:05.812 sys 0m7.808s 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:05.812 ************************************ 00:11:05.812 END TEST nvmf_zcopy 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.812 ************************************ 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:05.812 ************************************ 00:11:05.812 START TEST nvmf_nmic 00:11:05.812 ************************************ 00:11:05.812 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:05.812 * Looking for test storage... 
00:11:05.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.813 04:06:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:05.813 Cannot find device "nvmf_tgt_br" 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:05.813 Cannot find device "nvmf_tgt_br2" 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:11:05.813 Cannot find device "nvmf_tgt_br" 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:05.813 Cannot find device "nvmf_tgt_br2" 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:05.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:05.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:05.813 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:05.814 04:06:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:05.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:11:05.814 00:11:05.814 --- 10.0.0.2 ping statistics --- 00:11:05.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.814 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:05.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:05.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:11:05.814 00:11:05.814 --- 10.0.0.3 ping statistics --- 00:11:05.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.814 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:05.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:05.814 00:11:05.814 --- 10.0.0.1 ping statistics --- 00:11:05.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.814 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=81712 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 81712 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 81712 ']' 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.814 04:06:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.814 [2024-07-23 04:06:58.995502] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:11:05.814 [2024-07-23 04:06:58.995574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.814 [2024-07-23 04:06:59.114764] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
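The nvmf_veth_init plumbing traced above, plus the target launch, condense to roughly the sketch below. It reuses the interface names and addresses from the trace; the helper also sets up a second target interface (nvmf_tgt_if2, 10.0.0.3) the same way, and its exact ordering and error handling differ from this abbreviated version.

# Target lives in its own network namespace, reached through two veth pairs tied to a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Let NVMe/TCP traffic in and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Once the pings above confirm reachability, the target starts inside the namespace.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Keeping the target behind its own namespace and bridge isolates the NVMe/TCP listener from the host's real interfaces, which is why the initiator-side commands later in the log address 10.0.0.2 rather than localhost.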
00:11:05.814 [2024-07-23 04:06:59.131176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.073 [2024-07-23 04:06:59.197383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.073 [2024-07-23 04:06:59.197715] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.073 [2024-07-23 04:06:59.197854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.073 [2024-07-23 04:06:59.198026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.073 [2024-07-23 04:06:59.198072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.073 [2024-07-23 04:06:59.198280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.073 [2024-07-23 04:06:59.198434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.073 [2024-07-23 04:06:59.198674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.073 [2024-07-23 04:06:59.198673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.073 [2024-07-23 04:06:59.255000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:06.639 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.639 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:11:06.639 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.639 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.639 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.639 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.639 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.639 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.639 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.639 [2024-07-23 04:06:59.962798] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.897 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.897 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:06.897 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.897 04:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 Malloc0 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 [2024-07-23 04:07:00.035864] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.897 test case1: single bdev can't be used in multiple subsystems 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.897 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.897 [2024-07-23 04:07:00.059759] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:06.898 [2024-07-23 04:07:00.059805] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:06.898 [2024-07-23 04:07:00.059824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.898 request: 00:11:06.898 { 00:11:06.898 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:06.898 "namespace": { 00:11:06.898 "bdev_name": "Malloc0", 00:11:06.898 "no_auto_visible": false 00:11:06.898 }, 00:11:06.898 "method": "nvmf_subsystem_add_ns", 00:11:06.898 "req_id": 1 00:11:06.898 } 00:11:06.898 Got JSON-RPC error response 00:11:06.898 
response: 00:11:06.898 { 00:11:06.898 "code": -32602, 00:11:06.898 "message": "Invalid parameters" 00:11:06.898 } 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:06.898 Adding namespace failed - expected result. 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:06.898 test case2: host connect to nvmf target in multiple paths 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.898 [2024-07-23 04:07:00.071846] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.898 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:07.157 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.157 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:07.157 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.157 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:07.157 04:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:09.059 04:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:09.059 04:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:09.059 04:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.059 04:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:09.059 04:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.059 04:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:09.059 04:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:09.059 [global] 00:11:09.059 thread=1 00:11:09.059 
invalidate=1 00:11:09.059 rw=write 00:11:09.059 time_based=1 00:11:09.059 runtime=1 00:11:09.059 ioengine=libaio 00:11:09.059 direct=1 00:11:09.059 bs=4096 00:11:09.059 iodepth=1 00:11:09.059 norandommap=0 00:11:09.059 numjobs=1 00:11:09.059 00:11:09.059 verify_dump=1 00:11:09.059 verify_backlog=512 00:11:09.059 verify_state_save=0 00:11:09.059 do_verify=1 00:11:09.059 verify=crc32c-intel 00:11:09.059 [job0] 00:11:09.059 filename=/dev/nvme0n1 00:11:09.319 Could not set queue depth (nvme0n1) 00:11:09.319 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.319 fio-3.35 00:11:09.319 Starting 1 thread 00:11:10.694 00:11:10.694 job0: (groupid=0, jobs=1): err= 0: pid=81804: Tue Jul 23 04:07:03 2024 00:11:10.694 read: IOPS=2545, BW=9.94MiB/s (10.4MB/s)(9.95MiB/1001msec) 00:11:10.694 slat (nsec): min=10854, max=72502, avg=14006.33, stdev=5301.92 00:11:10.694 clat (usec): min=133, max=820, avg=221.59, stdev=44.85 00:11:10.694 lat (usec): min=145, max=835, avg=235.60, stdev=45.38 00:11:10.694 clat percentiles (usec): 00:11:10.694 | 1.00th=[ 147], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 190], 00:11:10.694 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 229], 00:11:10.694 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 285], 00:11:10.694 | 99.00th=[ 326], 99.50th=[ 400], 99.90th=[ 693], 99.95th=[ 758], 00:11:10.694 | 99.99th=[ 824] 00:11:10.694 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:10.694 slat (usec): min=13, max=105, avg=20.11, stdev= 7.42 00:11:10.694 clat (usec): min=76, max=406, avg=132.47, stdev=29.11 00:11:10.694 lat (usec): min=93, max=429, avg=152.58, stdev=30.42 00:11:10.694 clat percentiles (usec): 00:11:10.694 | 1.00th=[ 88], 5.00th=[ 94], 10.00th=[ 98], 20.00th=[ 108], 00:11:10.694 | 30.00th=[ 116], 40.00th=[ 124], 50.00th=[ 130], 60.00th=[ 137], 00:11:10.694 | 70.00th=[ 143], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 184], 00:11:10.694 | 99.00th=[ 215], 99.50th=[ 225], 99.90th=[ 338], 99.95th=[ 379], 00:11:10.694 | 99.99th=[ 408] 00:11:10.694 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:10.694 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:10.694 lat (usec) : 100=6.15%, 250=84.01%, 500=9.71%, 750=0.10%, 1000=0.04% 00:11:10.694 cpu : usr=2.10%, sys=6.70%, ctx=5108, majf=0, minf=2 00:11:10.694 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.694 issued rwts: total=2548,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.694 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.694 00:11:10.694 Run status group 0 (all jobs): 00:11:10.694 READ: bw=9.94MiB/s (10.4MB/s), 9.94MiB/s-9.94MiB/s (10.4MB/s-10.4MB/s), io=9.95MiB (10.4MB), run=1001-1001msec 00:11:10.694 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:11:10.694 00:11:10.694 Disk stats (read/write): 00:11:10.694 nvme0n1: ios=2179/2560, merge=0/0, ticks=503/375, in_queue=878, util=91.40% 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- 
# waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.694 rmmod nvme_tcp 00:11:10.694 rmmod nvme_fabrics 00:11:10.694 rmmod nvme_keyring 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 81712 ']' 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 81712 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 81712 ']' 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 81712 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81712 00:11:10.694 killing process with pid 81712 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81712' 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 81712 00:11:10.694 04:07:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 81712 00:11:10.952 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:10.953 00:11:10.953 real 0m5.694s 00:11:10.953 user 0m18.885s 00:11:10.953 sys 0m1.737s 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.953 ************************************ 00:11:10.953 END TEST nvmf_nmic 00:11:10.953 ************************************ 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:10.953 ************************************ 00:11:10.953 START TEST nvmf_fio_target 00:11:10.953 ************************************ 00:11:10.953 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:11.212 * Looking for test storage... 
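The nvmf_nmic run that finishes above reduces to a short RPC and nvme-cli sequence. The following is a condensed sketch assembled only from commands that appear in this log (rpc_cmd is the test suite's wrapper traced throughout; HOSTNQN and HOSTID stand in for the nvme gen-hostnqn values printed with the nvme connect calls), not an excerpt of nmic.sh itself:

    # test case1: the same bdev cannot back namespaces in two subsystems.
    # Malloc0 is already claimed (exclusive_write) by cnode1, so adding it
    # to cnode2 fails with JSON-RPC -32602 "Invalid parameters", as logged.
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1   # expected failure

    # test case2: a second listener on port 4421 gives the host two paths to
    # cnode1; the later nvme disconnect reports "disconnected 2 controller(s)".
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Between the connect and disconnect steps the suite runs fio through scripts/fio-wrapper (-p nvmf -i 4096 -d 1 -t write -r 1 -v) against /dev/nvme0n1, which produces the verify workload output shown above.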
00:11:11.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.212 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:11.213 
04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:11.213 Cannot find device "nvmf_tgt_br" 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.213 Cannot find device "nvmf_tgt_br2" 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:11.213 Cannot find device "nvmf_tgt_br" 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:11.213 Cannot find device "nvmf_tgt_br2" 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:11.213 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:11.471 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:11.471 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:11.471 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:11.471 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:11.471 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:11.472 
04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:11.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:11:11.472 00:11:11.472 --- 10.0.0.2 ping statistics --- 00:11:11.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.472 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:11.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:11.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:11:11.472 00:11:11.472 --- 10.0.0.3 ping statistics --- 00:11:11.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.472 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:11.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:11.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:11.472 00:11:11.472 --- 10.0.0.1 ping statistics --- 00:11:11.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.472 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=81981 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 81981 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 81981 ']' 00:11:11.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.472 04:07:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.472 [2024-07-23 04:07:04.805312] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
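The nvmf_veth_init trace above builds the topology the rest of this run depends on: the initiator stays in the root namespace on 10.0.0.1, while nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and listens on 10.0.0.2, with the veth pairs joined by the nvmf_br bridge. A condensed sketch of that setup, using only commands traced in the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) follows the same pattern and is omitted here for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target reachability check

The ping statistics above (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) are the sanity check that this wiring succeeded before the target is started.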
00:11:11.472 [2024-07-23 04:07:04.805515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.731 [2024-07-23 04:07:04.926515] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:11.731 [2024-07-23 04:07:04.941414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.731 [2024-07-23 04:07:05.023983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.731 [2024-07-23 04:07:05.024342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.731 [2024-07-23 04:07:05.024506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.731 [2024-07-23 04:07:05.024782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.731 [2024-07-23 04:07:05.024835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.731 [2024-07-23 04:07:05.025036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.731 [2024-07-23 04:07:05.025423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.731 [2024-07-23 04:07:05.025428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.731 [2024-07-23 04:07:05.025431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.990 [2024-07-23 04:07:05.108759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:12.556 04:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.556 04:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:12.556 04:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:12.556 04:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:12.556 04:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.556 04:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.556 04:07:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:12.815 [2024-07-23 04:07:06.036224] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.815 04:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.073 04:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:13.073 04:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.331 04:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:13.589 04:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.848 04:07:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:13.848 04:07:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.106 04:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:14.106 04:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:14.364 04:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.623 04:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:14.623 04:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.880 04:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:14.880 04:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:15.138 04:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:15.138 04:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:15.396 04:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.654 04:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:15.654 04:07:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.911 04:07:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:15.911 04:07:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.911 04:07:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.169 [2024-07-23 04:07:09.493751] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.169 04:07:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:16.426 04:07:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:16.684 04:07:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.684 04:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial 
SPDKISFASTANDAWESOME 4 00:11:16.684 04:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:16.684 04:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.684 04:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:16.684 04:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:16.684 04:07:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:19.210 04:07:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:19.210 04:07:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:19.210 04:07:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.210 04:07:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:19.210 04:07:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.210 04:07:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:19.210 04:07:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:19.210 [global] 00:11:19.210 thread=1 00:11:19.210 invalidate=1 00:11:19.210 rw=write 00:11:19.210 time_based=1 00:11:19.210 runtime=1 00:11:19.210 ioengine=libaio 00:11:19.210 direct=1 00:11:19.210 bs=4096 00:11:19.210 iodepth=1 00:11:19.210 norandommap=0 00:11:19.210 numjobs=1 00:11:19.210 00:11:19.210 verify_dump=1 00:11:19.210 verify_backlog=512 00:11:19.210 verify_state_save=0 00:11:19.210 do_verify=1 00:11:19.210 verify=crc32c-intel 00:11:19.210 [job0] 00:11:19.210 filename=/dev/nvme0n1 00:11:19.210 [job1] 00:11:19.210 filename=/dev/nvme0n2 00:11:19.210 [job2] 00:11:19.210 filename=/dev/nvme0n3 00:11:19.210 [job3] 00:11:19.210 filename=/dev/nvme0n4 00:11:19.210 Could not set queue depth (nvme0n1) 00:11:19.210 Could not set queue depth (nvme0n2) 00:11:19.210 Could not set queue depth (nvme0n3) 00:11:19.210 Could not set queue depth (nvme0n4) 00:11:19.210 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.210 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.210 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.210 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.210 fio-3.35 00:11:19.210 Starting 4 threads 00:11:20.142 00:11:20.142 job0: (groupid=0, jobs=1): err= 0: pid=82165: Tue Jul 23 04:07:13 2024 00:11:20.142 read: IOPS=2389, BW=9558KiB/s (9788kB/s)(9568KiB/1001msec) 00:11:20.142 slat (usec): min=11, max=126, avg=16.53, stdev= 6.83 00:11:20.142 clat (usec): min=131, max=511, avg=208.01, stdev=34.25 00:11:20.142 lat (usec): min=144, max=524, avg=224.55, stdev=34.83 00:11:20.142 clat percentiles (usec): 00:11:20.142 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 180], 00:11:20.142 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 212], 00:11:20.142 | 70.00th=[ 223], 80.00th=[ 
235], 90.00th=[ 253], 95.00th=[ 273], 00:11:20.142 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 355], 99.95th=[ 367], 00:11:20.142 | 99.99th=[ 510] 00:11:20.142 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:20.142 slat (usec): min=14, max=135, avg=24.81, stdev= 8.24 00:11:20.142 clat (usec): min=56, max=276, avg=152.39, stdev=32.10 00:11:20.142 lat (usec): min=111, max=333, avg=177.19, stdev=33.61 00:11:20.142 clat percentiles (usec): 00:11:20.142 | 1.00th=[ 100], 5.00th=[ 109], 10.00th=[ 115], 20.00th=[ 124], 00:11:20.142 | 30.00th=[ 133], 40.00th=[ 141], 50.00th=[ 149], 60.00th=[ 157], 00:11:20.142 | 70.00th=[ 167], 80.00th=[ 178], 90.00th=[ 198], 95.00th=[ 212], 00:11:20.142 | 99.00th=[ 239], 99.50th=[ 255], 99.90th=[ 265], 99.95th=[ 269], 00:11:20.142 | 99.99th=[ 277] 00:11:20.142 bw ( KiB/s): min=11616, max=11616, per=31.52%, avg=11616.00, stdev= 0.00, samples=1 00:11:20.142 iops : min= 2904, max= 2904, avg=2904.00, stdev= 0.00, samples=1 00:11:20.142 lat (usec) : 100=0.63%, 250=93.42%, 500=5.94%, 750=0.02% 00:11:20.142 cpu : usr=2.30%, sys=7.70%, ctx=4954, majf=0, minf=5 00:11:20.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.142 issued rwts: total=2392,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.142 job1: (groupid=0, jobs=1): err= 0: pid=82166: Tue Jul 23 04:07:13 2024 00:11:20.142 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:20.142 slat (usec): min=11, max=109, avg=15.35, stdev= 5.28 00:11:20.142 clat (usec): min=178, max=721, avg=245.03, stdev=36.64 00:11:20.142 lat (usec): min=191, max=738, avg=260.38, stdev=37.05 00:11:20.142 clat percentiles (usec): 00:11:20.142 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 217], 00:11:20.142 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 249], 00:11:20.142 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 310], 00:11:20.142 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 506], 99.95th=[ 660], 00:11:20.142 | 99.99th=[ 725] 00:11:20.142 write: IOPS=2176, BW=8707KiB/s (8916kB/s)(8716KiB/1001msec); 0 zone resets 00:11:20.142 slat (usec): min=15, max=102, avg=23.26, stdev= 7.36 00:11:20.142 clat (usec): min=129, max=646, avg=187.32, stdev=32.44 00:11:20.142 lat (usec): min=147, max=664, avg=210.58, stdev=33.21 00:11:20.142 clat percentiles (usec): 00:11:20.142 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 163], 00:11:20.142 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 190], 00:11:20.142 | 70.00th=[ 198], 80.00th=[ 210], 90.00th=[ 231], 95.00th=[ 247], 00:11:20.142 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 363], 99.95th=[ 367], 00:11:20.142 | 99.99th=[ 644] 00:11:20.142 bw ( KiB/s): min= 8832, max= 8832, per=23.97%, avg=8832.00, stdev= 0.00, samples=1 00:11:20.142 iops : min= 2208, max= 2208, avg=2208.00, stdev= 0.00, samples=1 00:11:20.142 lat (usec) : 250=79.39%, 500=20.51%, 750=0.09% 00:11:20.142 cpu : usr=1.40%, sys=6.70%, ctx=4227, majf=0, minf=12 00:11:20.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.142 issued rwts: total=2048,2179,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:20.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.142 job2: (groupid=0, jobs=1): err= 0: pid=82167: Tue Jul 23 04:07:13 2024 00:11:20.142 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:20.142 slat (nsec): min=12886, max=90276, avg=16888.80, stdev=5480.23 00:11:20.142 clat (usec): min=144, max=1724, avg=235.70, stdev=51.11 00:11:20.142 lat (usec): min=164, max=1742, avg=252.59, stdev=51.27 00:11:20.142 clat percentiles (usec): 00:11:20.142 | 1.00th=[ 161], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 202], 00:11:20.142 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 241], 00:11:20.142 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 306], 00:11:20.142 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 433], 99.95th=[ 469], 00:11:20.142 | 99.99th=[ 1729] 00:11:20.142 write: IOPS=2226, BW=8907KiB/s (9121kB/s)(8916KiB/1001msec); 0 zone resets 00:11:20.142 slat (usec): min=18, max=118, avg=25.94, stdev= 8.08 00:11:20.142 clat (usec): min=99, max=529, avg=186.76, stdev=41.38 00:11:20.142 lat (usec): min=126, max=552, avg=212.70, stdev=42.20 00:11:20.142 clat percentiles (usec): 00:11:20.142 | 1.00th=[ 119], 5.00th=[ 128], 10.00th=[ 139], 20.00th=[ 153], 00:11:20.142 | 30.00th=[ 163], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 192], 00:11:20.142 | 70.00th=[ 204], 80.00th=[ 217], 90.00th=[ 237], 95.00th=[ 258], 00:11:20.142 | 99.00th=[ 302], 99.50th=[ 351], 99.90th=[ 404], 99.95th=[ 498], 00:11:20.142 | 99.99th=[ 529] 00:11:20.142 bw ( KiB/s): min= 9224, max= 9224, per=25.03%, avg=9224.00, stdev= 0.00, samples=1 00:11:20.142 iops : min= 2306, max= 2306, avg=2306.00, stdev= 0.00, samples=1 00:11:20.142 lat (usec) : 100=0.02%, 250=80.99%, 500=18.94%, 750=0.02% 00:11:20.142 lat (msec) : 2=0.02% 00:11:20.142 cpu : usr=2.30%, sys=6.70%, ctx=4277, majf=0, minf=7 00:11:20.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.142 issued rwts: total=2048,2229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.142 job3: (groupid=0, jobs=1): err= 0: pid=82168: Tue Jul 23 04:07:13 2024 00:11:20.142 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:20.142 slat (nsec): min=11881, max=82591, avg=16172.27, stdev=5726.01 00:11:20.142 clat (usec): min=146, max=643, avg=235.61, stdev=39.29 00:11:20.142 lat (usec): min=159, max=660, avg=251.78, stdev=39.48 00:11:20.142 clat percentiles (usec): 00:11:20.142 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 204], 00:11:20.142 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 233], 60.00th=[ 243], 00:11:20.142 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 306], 00:11:20.142 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 379], 99.95th=[ 429], 00:11:20.142 | 99.99th=[ 644] 00:11:20.142 write: IOPS=2251, BW=9007KiB/s (9223kB/s)(9016KiB/1001msec); 0 zone resets 00:11:20.142 slat (usec): min=14, max=104, avg=25.47, stdev= 8.02 00:11:20.142 clat (usec): min=103, max=383, avg=185.77, stdev=38.86 00:11:20.142 lat (usec): min=129, max=406, avg=211.24, stdev=39.54 00:11:20.142 clat percentiles (usec): 00:11:20.142 | 1.00th=[ 116], 5.00th=[ 130], 10.00th=[ 139], 20.00th=[ 153], 00:11:20.142 | 30.00th=[ 163], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 192], 00:11:20.142 | 70.00th=[ 202], 80.00th=[ 217], 
90.00th=[ 237], 95.00th=[ 258], 00:11:20.142 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 367], 99.95th=[ 375], 00:11:20.142 | 99.99th=[ 383] 00:11:20.142 bw ( KiB/s): min= 9400, max= 9400, per=25.51%, avg=9400.00, stdev= 0.00, samples=1 00:11:20.142 iops : min= 2350, max= 2350, avg=2350.00, stdev= 0.00, samples=1 00:11:20.142 lat (usec) : 250=81.19%, 500=18.78%, 750=0.02% 00:11:20.142 cpu : usr=2.30%, sys=6.50%, ctx=4302, majf=0, minf=11 00:11:20.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.142 issued rwts: total=2048,2254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.142 00:11:20.142 Run status group 0 (all jobs): 00:11:20.142 READ: bw=33.3MiB/s (34.9MB/s), 8184KiB/s-9558KiB/s (8380kB/s-9788kB/s), io=33.3MiB (35.0MB), run=1001-1001msec 00:11:20.142 WRITE: bw=36.0MiB/s (37.7MB/s), 8707KiB/s-9.99MiB/s (8916kB/s-10.5MB/s), io=36.0MiB (37.8MB), run=1001-1001msec 00:11:20.142 00:11:20.142 Disk stats (read/write): 00:11:20.142 nvme0n1: ios=2098/2203, merge=0/0, ticks=471/371, in_queue=842, util=87.68% 00:11:20.142 nvme0n2: ios=1682/2048, merge=0/0, ticks=457/406, in_queue=863, util=90.27% 00:11:20.142 nvme0n3: ios=1655/2048, merge=0/0, ticks=403/403, in_queue=806, util=89.15% 00:11:20.142 nvme0n4: ios=1684/2048, merge=0/0, ticks=411/413, in_queue=824, util=89.91% 00:11:20.143 04:07:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:20.143 [global] 00:11:20.143 thread=1 00:11:20.143 invalidate=1 00:11:20.143 rw=randwrite 00:11:20.143 time_based=1 00:11:20.143 runtime=1 00:11:20.143 ioengine=libaio 00:11:20.143 direct=1 00:11:20.143 bs=4096 00:11:20.143 iodepth=1 00:11:20.143 norandommap=0 00:11:20.143 numjobs=1 00:11:20.143 00:11:20.143 verify_dump=1 00:11:20.143 verify_backlog=512 00:11:20.143 verify_state_save=0 00:11:20.143 do_verify=1 00:11:20.143 verify=crc32c-intel 00:11:20.143 [job0] 00:11:20.143 filename=/dev/nvme0n1 00:11:20.143 [job1] 00:11:20.143 filename=/dev/nvme0n2 00:11:20.143 [job2] 00:11:20.143 filename=/dev/nvme0n3 00:11:20.143 [job3] 00:11:20.143 filename=/dev/nvme0n4 00:11:20.400 Could not set queue depth (nvme0n1) 00:11:20.400 Could not set queue depth (nvme0n2) 00:11:20.400 Could not set queue depth (nvme0n3) 00:11:20.400 Could not set queue depth (nvme0n4) 00:11:20.400 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.400 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.401 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.401 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.401 fio-3.35 00:11:20.401 Starting 4 threads 00:11:21.775 00:11:21.775 job0: (groupid=0, jobs=1): err= 0: pid=82227: Tue Jul 23 04:07:14 2024 00:11:21.775 read: IOPS=1288, BW=5155KiB/s (5279kB/s)(5160KiB/1001msec) 00:11:21.775 slat (nsec): min=10506, max=73228, avg=16786.87, stdev=6716.47 00:11:21.775 clat (usec): min=238, max=1053, avg=396.33, stdev=63.23 00:11:21.775 lat (usec): min=258, max=1080, avg=413.12, stdev=64.78 00:11:21.775 clat 
percentiles (usec): 00:11:21.775 | 1.00th=[ 289], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 347], 00:11:21.775 | 30.00th=[ 359], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 400], 00:11:21.775 | 70.00th=[ 416], 80.00th=[ 441], 90.00th=[ 474], 95.00th=[ 510], 00:11:21.775 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 742], 99.95th=[ 1057], 00:11:21.775 | 99.99th=[ 1057] 00:11:21.776 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:21.776 slat (usec): min=14, max=110, avg=25.37, stdev= 7.75 00:11:21.776 clat (usec): min=176, max=566, avg=275.08, stdev=49.26 00:11:21.776 lat (usec): min=201, max=608, avg=300.45, stdev=49.27 00:11:21.776 clat percentiles (usec): 00:11:21.776 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 231], 00:11:21.776 | 30.00th=[ 247], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:11:21.776 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 334], 95.00th=[ 355], 00:11:21.776 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 553], 99.95th=[ 570], 00:11:21.776 | 99.99th=[ 570] 00:11:21.776 bw ( KiB/s): min= 8008, max= 8008, per=31.68%, avg=8008.00, stdev= 0.00, samples=1 00:11:21.776 iops : min= 2002, max= 2002, avg=2002.00, stdev= 0.00, samples=1 00:11:21.776 lat (usec) : 250=18.01%, 500=79.16%, 750=2.80% 00:11:21.776 lat (msec) : 2=0.04% 00:11:21.776 cpu : usr=1.80%, sys=4.80%, ctx=2830, majf=0, minf=5 00:11:21.776 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.776 issued rwts: total=1290,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.776 job1: (groupid=0, jobs=1): err= 0: pid=82228: Tue Jul 23 04:07:14 2024 00:11:21.776 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:21.776 slat (nsec): min=11420, max=52307, avg=14514.12, stdev=4690.60 00:11:21.776 clat (usec): min=186, max=644, avg=254.86, stdev=34.42 00:11:21.776 lat (usec): min=199, max=658, avg=269.37, stdev=34.95 00:11:21.776 clat percentiles (usec): 00:11:21.776 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:11:21.776 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:11:21.776 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 310], 00:11:21.776 | 99.00th=[ 355], 99.50th=[ 392], 99.90th=[ 578], 99.95th=[ 594], 00:11:21.776 | 99.99th=[ 644] 00:11:21.776 write: IOPS=2227, BW=8911KiB/s (9125kB/s)(8920KiB/1001msec); 0 zone resets 00:11:21.776 slat (nsec): min=15871, max=99106, avg=20492.98, stdev=6920.86 00:11:21.776 clat (usec): min=117, max=695, avg=177.33, stdev=31.07 00:11:21.776 lat (usec): min=134, max=713, avg=197.83, stdev=32.62 00:11:21.776 clat percentiles (usec): 00:11:21.776 | 1.00th=[ 130], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 155], 00:11:21.776 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 178], 00:11:21.776 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 212], 95.00th=[ 227], 00:11:21.776 | 99.00th=[ 258], 99.50th=[ 281], 99.90th=[ 363], 99.95th=[ 635], 00:11:21.776 | 99.99th=[ 693] 00:11:21.776 bw ( KiB/s): min= 8936, max= 8936, per=35.35%, avg=8936.00, stdev= 0.00, samples=1 00:11:21.776 iops : min= 2234, max= 2234, avg=2234.00, stdev= 0.00, samples=1 00:11:21.776 lat (usec) : 250=75.76%, 500=24.10%, 750=0.14% 00:11:21.776 cpu : usr=1.40%, sys=6.00%, ctx=4278, majf=0, minf=17 00:11:21.776 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.776 issued rwts: total=2048,2230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.776 job2: (groupid=0, jobs=1): err= 0: pid=82229: Tue Jul 23 04:07:14 2024 00:11:21.776 read: IOPS=1013, BW=4056KiB/s (4153kB/s)(4060KiB/1001msec) 00:11:21.776 slat (usec): min=14, max=143, avg=32.61, stdev=11.85 00:11:21.776 clat (usec): min=230, max=949, avg=507.72, stdev=127.94 00:11:21.776 lat (usec): min=258, max=989, avg=540.34, stdev=131.91 00:11:21.776 clat percentiles (usec): 00:11:21.776 | 1.00th=[ 293], 5.00th=[ 330], 10.00th=[ 351], 20.00th=[ 379], 00:11:21.776 | 30.00th=[ 412], 40.00th=[ 465], 50.00th=[ 506], 60.00th=[ 545], 00:11:21.776 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 668], 95.00th=[ 750], 00:11:21.776 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 889], 99.95th=[ 947], 00:11:21.776 | 99.99th=[ 947] 00:11:21.776 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:21.776 slat (usec): min=28, max=136, avg=44.45, stdev=11.05 00:11:21.776 clat (usec): min=135, max=761, avg=387.89, stdev=112.98 00:11:21.776 lat (usec): min=167, max=813, avg=432.34, stdev=115.28 00:11:21.776 clat percentiles (usec): 00:11:21.776 | 1.00th=[ 151], 5.00th=[ 255], 10.00th=[ 273], 20.00th=[ 293], 00:11:21.776 | 30.00th=[ 314], 40.00th=[ 338], 50.00th=[ 371], 60.00th=[ 392], 00:11:21.776 | 70.00th=[ 424], 80.00th=[ 510], 90.00th=[ 562], 95.00th=[ 586], 00:11:21.776 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[ 742], 99.95th=[ 758], 00:11:21.776 | 99.99th=[ 758] 00:11:21.776 bw ( KiB/s): min= 4096, max= 4096, per=16.20%, avg=4096.00, stdev= 0.00, samples=1 00:11:21.776 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:21.776 lat (usec) : 250=2.40%, 500=60.81%, 750=34.23%, 1000=2.55% 00:11:21.776 cpu : usr=2.10%, sys=6.40%, ctx=2043, majf=0, minf=13 00:11:21.776 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.776 issued rwts: total=1015,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.776 job3: (groupid=0, jobs=1): err= 0: pid=82230: Tue Jul 23 04:07:14 2024 00:11:21.776 read: IOPS=1288, BW=5155KiB/s (5279kB/s)(5160KiB/1001msec) 00:11:21.776 slat (nsec): min=10811, max=67857, avg=18013.08, stdev=6143.45 00:11:21.776 clat (usec): min=250, max=949, avg=394.96, stdev=65.36 00:11:21.776 lat (usec): min=262, max=962, avg=412.97, stdev=65.20 00:11:21.776 clat percentiles (usec): 00:11:21.776 | 1.00th=[ 285], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 343], 00:11:21.776 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 400], 00:11:21.776 | 70.00th=[ 416], 80.00th=[ 437], 90.00th=[ 482], 95.00th=[ 510], 00:11:21.776 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 799], 99.95th=[ 947], 00:11:21.776 | 99.99th=[ 947] 00:11:21.776 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:21.776 slat (nsec): min=13611, max=96684, avg=25597.49, stdev=8370.59 00:11:21.776 clat (usec): min=168, max=567, avg=274.90, stdev=47.63 00:11:21.776 lat (usec): min=197, max=588, avg=300.50, stdev=48.55 
00:11:21.776 clat percentiles (usec): 00:11:21.776 | 1.00th=[ 194], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 233], 00:11:21.776 | 30.00th=[ 247], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:11:21.776 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 347], 00:11:21.776 | 99.00th=[ 441], 99.50th=[ 469], 99.90th=[ 553], 99.95th=[ 570], 00:11:21.776 | 99.99th=[ 570] 00:11:21.776 bw ( KiB/s): min= 8000, max= 8000, per=31.65%, avg=8000.00, stdev= 0.00, samples=1 00:11:21.776 iops : min= 2000, max= 2000, avg=2000.00, stdev= 0.00, samples=1 00:11:21.776 lat (usec) : 250=17.55%, 500=79.26%, 750=3.08%, 1000=0.11% 00:11:21.776 cpu : usr=1.90%, sys=4.90%, ctx=2833, majf=0, minf=10 00:11:21.776 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.776 issued rwts: total=1290,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.776 00:11:21.776 Run status group 0 (all jobs): 00:11:21.776 READ: bw=22.0MiB/s (23.1MB/s), 4056KiB/s-8184KiB/s (4153kB/s-8380kB/s), io=22.0MiB (23.1MB), run=1001-1001msec 00:11:21.776 WRITE: bw=24.7MiB/s (25.9MB/s), 4092KiB/s-8911KiB/s (4190kB/s-9125kB/s), io=24.7MiB (25.9MB), run=1001-1001msec 00:11:21.776 00:11:21.776 Disk stats (read/write): 00:11:21.776 nvme0n1: ios=1074/1484, merge=0/0, ticks=400/372, in_queue=772, util=87.98% 00:11:21.776 nvme0n2: ios=1727/2048, merge=0/0, ticks=491/390, in_queue=881, util=89.47% 00:11:21.776 nvme0n3: ios=768/1024, merge=0/0, ticks=410/423, in_queue=833, util=89.38% 00:11:21.776 nvme0n4: ios=1024/1485, merge=0/0, ticks=370/374, in_queue=744, util=89.74% 00:11:21.776 04:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:21.776 [global] 00:11:21.776 thread=1 00:11:21.776 invalidate=1 00:11:21.776 rw=write 00:11:21.776 time_based=1 00:11:21.776 runtime=1 00:11:21.776 ioengine=libaio 00:11:21.776 direct=1 00:11:21.776 bs=4096 00:11:21.776 iodepth=128 00:11:21.776 norandommap=0 00:11:21.776 numjobs=1 00:11:21.776 00:11:21.776 verify_dump=1 00:11:21.776 verify_backlog=512 00:11:21.776 verify_state_save=0 00:11:21.776 do_verify=1 00:11:21.776 verify=crc32c-intel 00:11:21.776 [job0] 00:11:21.776 filename=/dev/nvme0n1 00:11:21.776 [job1] 00:11:21.776 filename=/dev/nvme0n2 00:11:21.776 [job2] 00:11:21.776 filename=/dev/nvme0n3 00:11:21.776 [job3] 00:11:21.776 filename=/dev/nvme0n4 00:11:21.776 Could not set queue depth (nvme0n1) 00:11:21.776 Could not set queue depth (nvme0n2) 00:11:21.776 Could not set queue depth (nvme0n3) 00:11:21.776 Could not set queue depth (nvme0n4) 00:11:21.776 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.776 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.776 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.776 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.776 fio-3.35 00:11:21.776 Starting 4 threads 00:11:23.152 00:11:23.152 job0: (groupid=0, jobs=1): err= 0: pid=82289: Tue Jul 23 04:07:16 2024 00:11:23.152 read: IOPS=2549, BW=9.96MiB/s 
(10.4MB/s)(10.0MiB/1004msec) 00:11:23.152 slat (usec): min=5, max=7560, avg=185.64, stdev=957.85 00:11:23.152 clat (usec): min=16900, max=28929, avg=24303.12, stdev=1656.03 00:11:23.152 lat (usec): min=22354, max=29010, avg=24488.76, stdev=1361.69 00:11:23.152 clat percentiles (usec): 00:11:23.152 | 1.00th=[18482], 5.00th=[22414], 10.00th=[22676], 20.00th=[23462], 00:11:23.152 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24511], 00:11:23.152 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26870], 95.00th=[27395], 00:11:23.152 | 99.00th=[28705], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:11:23.152 | 99.99th=[28967] 00:11:23.152 write: IOPS=2678, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1004msec); 0 zone resets 00:11:23.152 slat (usec): min=12, max=7469, avg=188.16, stdev=930.02 00:11:23.152 clat (usec): min=2551, max=27262, avg=23771.91, stdev=2707.01 00:11:23.152 lat (usec): min=7135, max=27402, avg=23960.07, stdev=2548.97 00:11:23.152 clat percentiles (usec): 00:11:23.152 | 1.00th=[ 7832], 5.00th=[19268], 10.00th=[22414], 20.00th=[22938], 00:11:23.152 | 30.00th=[23462], 40.00th=[23725], 50.00th=[24249], 60.00th=[24511], 00:11:23.152 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26084], 95.00th=[26608], 00:11:23.152 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:11:23.152 | 99.99th=[27132] 00:11:23.152 bw ( KiB/s): min= 8720, max=11776, per=27.14%, avg=10248.00, stdev=2160.92, samples=2 00:11:23.152 iops : min= 2180, max= 2944, avg=2562.00, stdev=540.23, samples=2 00:11:23.152 lat (msec) : 4=0.02%, 10=0.61%, 20=3.73%, 50=95.64% 00:11:23.152 cpu : usr=3.09%, sys=6.38%, ctx=178, majf=0, minf=13 00:11:23.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:23.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.152 issued rwts: total=2560,2689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.152 job1: (groupid=0, jobs=1): err= 0: pid=82290: Tue Jul 23 04:07:16 2024 00:11:23.152 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:11:23.152 slat (usec): min=5, max=7457, avg=185.58, stdev=956.40 00:11:23.152 clat (usec): min=17016, max=28814, avg=24259.03, stdev=1624.49 00:11:23.152 lat (usec): min=22338, max=28827, avg=24444.61, stdev=1326.10 00:11:23.152 clat percentiles (usec): 00:11:23.152 | 1.00th=[18482], 5.00th=[22414], 10.00th=[22676], 20.00th=[23462], 00:11:23.152 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:11:23.152 | 70.00th=[24511], 80.00th=[25297], 90.00th=[26608], 95.00th=[27132], 00:11:23.152 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:11:23.152 | 99.99th=[28705] 00:11:23.152 write: IOPS=2715, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1002msec); 0 zone resets 00:11:23.152 slat (usec): min=11, max=7082, avg=185.90, stdev=914.41 00:11:23.152 clat (usec): min=272, max=27166, avg=23498.06, stdev=3342.49 00:11:23.152 lat (usec): min=2840, max=27191, avg=23683.96, stdev=3224.99 00:11:23.152 clat percentiles (usec): 00:11:23.152 | 1.00th=[ 3490], 5.00th=[19006], 10.00th=[22676], 20.00th=[22938], 00:11:23.152 | 30.00th=[23462], 40.00th=[23462], 50.00th=[24249], 60.00th=[24249], 00:11:23.152 | 70.00th=[24511], 80.00th=[25297], 90.00th=[25822], 95.00th=[26608], 00:11:23.152 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:11:23.152 | 99.99th=[27132] 
00:11:23.152 bw ( KiB/s): min=11760, max=11760, per=31.14%, avg=11760.00, stdev= 0.00, samples=1 00:11:23.152 iops : min= 2940, max= 2940, avg=2940.00, stdev= 0.00, samples=1 00:11:23.152 lat (usec) : 500=0.02% 00:11:23.152 lat (msec) : 4=0.61%, 10=0.61%, 20=3.45%, 50=95.32% 00:11:23.152 cpu : usr=2.60%, sys=7.59%, ctx=172, majf=0, minf=13 00:11:23.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:23.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.152 issued rwts: total=2560,2721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.152 job2: (groupid=0, jobs=1): err= 0: pid=82291: Tue Jul 23 04:07:16 2024 00:11:23.152 read: IOPS=2488, BW=9954KiB/s (10.2MB/s)(9.79MiB/1007msec) 00:11:23.152 slat (usec): min=5, max=30438, avg=216.14, stdev=1455.91 00:11:23.152 clat (usec): min=2024, max=61810, avg=28141.52, stdev=8109.97 00:11:23.152 lat (usec): min=9447, max=61822, avg=28357.66, stdev=8174.34 00:11:23.152 clat percentiles (usec): 00:11:23.152 | 1.00th=[10028], 5.00th=[18220], 10.00th=[21365], 20.00th=[22676], 00:11:23.152 | 30.00th=[22938], 40.00th=[23725], 50.00th=[27919], 60.00th=[30802], 00:11:23.152 | 70.00th=[31327], 80.00th=[32113], 90.00th=[38011], 95.00th=[43254], 00:11:23.152 | 99.00th=[55837], 99.50th=[59507], 99.90th=[61604], 99.95th=[61604], 00:11:23.152 | 99.99th=[61604] 00:11:23.152 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:11:23.152 slat (usec): min=5, max=18327, avg=172.38, stdev=1020.52 00:11:23.152 clat (usec): min=6279, max=61767, avg=22270.12, stdev=6727.88 00:11:23.152 lat (usec): min=6297, max=61774, avg=22442.50, stdev=6705.82 00:11:23.152 clat percentiles (usec): 00:11:23.152 | 1.00th=[ 9241], 5.00th=[15401], 10.00th=[16057], 20.00th=[17695], 00:11:23.152 | 30.00th=[17957], 40.00th=[18744], 50.00th=[20317], 60.00th=[21103], 00:11:23.152 | 70.00th=[24511], 80.00th=[26608], 90.00th=[34341], 95.00th=[35914], 00:11:23.152 | 99.00th=[36963], 99.50th=[36963], 99.90th=[43254], 99.95th=[50070], 00:11:23.152 | 99.99th=[61604] 00:11:23.152 bw ( KiB/s): min= 8192, max=12288, per=27.12%, avg=10240.00, stdev=2896.31, samples=2 00:11:23.152 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:23.152 lat (msec) : 4=0.02%, 10=1.20%, 20=24.30%, 50=73.17%, 100=1.30% 00:11:23.153 cpu : usr=2.39%, sys=7.16%, ctx=163, majf=0, minf=11 00:11:23.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:23.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.153 issued rwts: total=2506,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.153 job3: (groupid=0, jobs=1): err= 0: pid=82292: Tue Jul 23 04:07:16 2024 00:11:23.153 read: IOPS=1436, BW=5746KiB/s (5883kB/s)(5780KiB/1006msec) 00:11:23.153 slat (usec): min=5, max=20983, avg=308.82, stdev=1499.89 00:11:23.153 clat (usec): min=1075, max=97858, avg=35184.49, stdev=12744.29 00:11:23.153 lat (usec): min=9449, max=97893, avg=35493.32, stdev=12909.06 00:11:23.153 clat percentiles (usec): 00:11:23.153 | 1.00th=[ 9765], 5.00th=[20317], 10.00th=[27395], 20.00th=[30016], 00:11:23.153 | 30.00th=[30540], 40.00th=[31065], 50.00th=[31589], 60.00th=[32113], 00:11:23.153 | 
70.00th=[32637], 80.00th=[39060], 90.00th=[57934], 95.00th=[66323], 00:11:23.153 | 99.00th=[82314], 99.50th=[89654], 99.90th=[89654], 99.95th=[98042], 00:11:23.153 | 99.99th=[98042] 00:11:23.153 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:11:23.153 slat (usec): min=14, max=19655, avg=353.30, stdev=1464.97 00:11:23.153 clat (msec): min=14, max=108, avg=48.41, stdev=23.21 00:11:23.153 lat (msec): min=14, max=108, avg=48.76, stdev=23.33 00:11:23.153 clat percentiles (msec): 00:11:23.153 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 28], 20.00th=[ 34], 00:11:23.153 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 38], 60.00th=[ 40], 00:11:23.153 | 70.00th=[ 58], 80.00th=[ 68], 90.00th=[ 92], 95.00th=[ 96], 00:11:23.153 | 99.00th=[ 102], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 109], 00:11:23.153 | 99.99th=[ 109] 00:11:23.153 bw ( KiB/s): min= 6032, max= 6256, per=16.27%, avg=6144.00, stdev=158.39, samples=2 00:11:23.153 iops : min= 1508, max= 1564, avg=1536.00, stdev=39.60, samples=2 00:11:23.153 lat (msec) : 2=0.03%, 10=0.67%, 20=3.05%, 50=73.10%, 100=21.67% 00:11:23.153 lat (msec) : 250=1.48% 00:11:23.153 cpu : usr=1.49%, sys=5.67%, ctx=208, majf=0, minf=13 00:11:23.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:11:23.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.153 issued rwts: total=1445,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.153 00:11:23.153 Run status group 0 (all jobs): 00:11:23.153 READ: bw=35.2MiB/s (36.9MB/s), 5746KiB/s-9.98MiB/s (5883kB/s-10.5MB/s), io=35.4MiB (37.2MB), run=1002-1007msec 00:11:23.153 WRITE: bw=36.9MiB/s (38.7MB/s), 6107KiB/s-10.6MiB/s (6254kB/s-11.1MB/s), io=37.1MiB (38.9MB), run=1002-1007msec 00:11:23.153 00:11:23.153 Disk stats (read/write): 00:11:23.153 nvme0n1: ios=2098/2464, merge=0/0, ticks=11031/12408, in_queue=23439, util=88.57% 00:11:23.153 nvme0n2: ios=2097/2464, merge=0/0, ticks=11686/12886, in_queue=24572, util=89.17% 00:11:23.153 nvme0n3: ios=2048/2407, merge=0/0, ticks=53813/49145, in_queue=102958, util=89.08% 00:11:23.153 nvme0n4: ios=1024/1399, merge=0/0, ticks=13319/21789, in_queue=35108, util=89.64% 00:11:23.153 04:07:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:23.153 [global] 00:11:23.153 thread=1 00:11:23.153 invalidate=1 00:11:23.153 rw=randwrite 00:11:23.153 time_based=1 00:11:23.153 runtime=1 00:11:23.153 ioengine=libaio 00:11:23.153 direct=1 00:11:23.153 bs=4096 00:11:23.153 iodepth=128 00:11:23.153 norandommap=0 00:11:23.153 numjobs=1 00:11:23.153 00:11:23.153 verify_dump=1 00:11:23.153 verify_backlog=512 00:11:23.153 verify_state_save=0 00:11:23.153 do_verify=1 00:11:23.153 verify=crc32c-intel 00:11:23.153 [job0] 00:11:23.153 filename=/dev/nvme0n1 00:11:23.153 [job1] 00:11:23.153 filename=/dev/nvme0n2 00:11:23.153 [job2] 00:11:23.153 filename=/dev/nvme0n3 00:11:23.153 [job3] 00:11:23.153 filename=/dev/nvme0n4 00:11:23.153 Could not set queue depth (nvme0n1) 00:11:23.153 Could not set queue depth (nvme0n2) 00:11:23.153 Could not set queue depth (nvme0n3) 00:11:23.153 Could not set queue depth (nvme0n4) 00:11:23.153 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.153 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.153 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.153 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.153 fio-3.35 00:11:23.153 Starting 4 threads 00:11:24.528 00:11:24.528 job0: (groupid=0, jobs=1): err= 0: pid=82345: Tue Jul 23 04:07:17 2024 00:11:24.528 read: IOPS=2989, BW=11.7MiB/s (12.2MB/s)(11.8MiB/1010msec) 00:11:24.528 slat (usec): min=10, max=11445, avg=160.95, stdev=1071.60 00:11:24.528 clat (usec): min=1222, max=33833, avg=21789.64, stdev=2954.68 00:11:24.528 lat (usec): min=9311, max=41641, avg=21950.60, stdev=2986.29 00:11:24.528 clat percentiles (usec): 00:11:24.528 | 1.00th=[ 9896], 5.00th=[14091], 10.00th=[20841], 20.00th=[21627], 00:11:24.528 | 30.00th=[21890], 40.00th=[21890], 50.00th=[22152], 60.00th=[22414], 00:11:24.528 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23462], 95.00th=[23987], 00:11:24.528 | 99.00th=[33162], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:11:24.528 | 99.99th=[33817] 00:11:24.528 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:11:24.528 slat (usec): min=6, max=16562, avg=160.83, stdev=1037.99 00:11:24.528 clat (usec): min=10515, max=28953, avg=20213.81, stdev=2092.44 00:11:24.528 lat (usec): min=13878, max=28980, avg=20374.63, stdev=1879.33 00:11:24.528 clat percentiles (usec): 00:11:24.528 | 1.00th=[12518], 5.00th=[18220], 10.00th=[18482], 20.00th=[19006], 00:11:24.528 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20317], 60.00th=[20579], 00:11:24.528 | 70.00th=[20841], 80.00th=[20841], 90.00th=[21627], 95.00th=[23462], 00:11:24.528 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:11:24.528 | 99.99th=[28967] 00:11:24.528 bw ( KiB/s): min=12288, max=12288, per=20.76%, avg=12288.00, stdev= 0.00, samples=2 00:11:24.528 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:24.528 lat (msec) : 2=0.02%, 10=0.53%, 20=23.69%, 50=75.77% 00:11:24.528 cpu : usr=2.58%, sys=10.21%, ctx=132, majf=0, minf=9 00:11:24.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:24.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.528 issued rwts: total=3019,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.528 job1: (groupid=0, jobs=1): err= 0: pid=82346: Tue Jul 23 04:07:17 2024 00:11:24.528 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:11:24.528 slat (usec): min=5, max=7652, avg=120.53, stdev=589.86 00:11:24.528 clat (usec): min=9246, max=24933, avg=15661.80, stdev=1746.42 00:11:24.528 lat (usec): min=9919, max=24971, avg=15782.33, stdev=1773.92 00:11:24.528 clat percentiles (usec): 00:11:24.528 | 1.00th=[11207], 5.00th=[13042], 10.00th=[13566], 20.00th=[14353], 00:11:24.528 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15664], 60.00th=[15926], 00:11:24.528 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17695], 95.00th=[19006], 00:11:24.528 | 99.00th=[20055], 99.50th=[20317], 99.90th=[22414], 99.95th=[22938], 00:11:24.528 | 99.99th=[25035] 00:11:24.528 write: IOPS=4166, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1006msec); 0 zone resets 00:11:24.528 slat (usec): min=12, max=7469, avg=113.56, stdev=624.89 00:11:24.528 clat 
(usec): min=512, max=23750, avg=14998.00, stdev=1938.14 00:11:24.528 lat (usec): min=5676, max=23799, avg=15111.56, stdev=2008.08 00:11:24.528 clat percentiles (usec): 00:11:24.528 | 1.00th=[ 6915], 5.00th=[12387], 10.00th=[13173], 20.00th=[13960], 00:11:24.528 | 30.00th=[14353], 40.00th=[14746], 50.00th=[14877], 60.00th=[15270], 00:11:24.528 | 70.00th=[15795], 80.00th=[16319], 90.00th=[16909], 95.00th=[17957], 00:11:24.528 | 99.00th=[20055], 99.50th=[21103], 99.90th=[22676], 99.95th=[22676], 00:11:24.528 | 99.99th=[23725] 00:11:24.528 bw ( KiB/s): min=16384, max=16416, per=27.70%, avg=16400.00, stdev=22.63, samples=2 00:11:24.528 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:11:24.528 lat (usec) : 750=0.01% 00:11:24.528 lat (msec) : 10=1.23%, 20=97.67%, 50=1.09% 00:11:24.528 cpu : usr=3.98%, sys=12.54%, ctx=341, majf=0, minf=13 00:11:24.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:24.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.528 issued rwts: total=4096,4191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.528 job2: (groupid=0, jobs=1): err= 0: pid=82348: Tue Jul 23 04:07:17 2024 00:11:24.528 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:24.529 slat (usec): min=7, max=7465, avg=133.75, stdev=654.61 00:11:24.529 clat (usec): min=12610, max=22756, avg=17646.84, stdev=1320.99 00:11:24.529 lat (usec): min=15813, max=22770, avg=17780.58, stdev=1162.22 00:11:24.529 clat percentiles (usec): 00:11:24.529 | 1.00th=[13566], 5.00th=[16319], 10.00th=[16581], 20.00th=[16909], 00:11:24.529 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:11:24.529 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19530], 95.00th=[20055], 00:11:24.529 | 99.00th=[21365], 99.50th=[22676], 99.90th=[22676], 99.95th=[22676], 00:11:24.529 | 99.99th=[22676] 00:11:24.529 write: IOPS=3765, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1003msec); 0 zone resets 00:11:24.529 slat (usec): min=13, max=4447, avg=129.77, stdev=582.62 00:11:24.529 clat (usec): min=285, max=19471, avg=16713.29, stdev=1823.71 00:11:24.529 lat (usec): min=3444, max=19503, avg=16843.06, stdev=1728.96 00:11:24.529 clat percentiles (usec): 00:11:24.529 | 1.00th=[ 7635], 5.00th=[14091], 10.00th=[15926], 20.00th=[16188], 00:11:24.529 | 30.00th=[16450], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:11:24.529 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18482], 00:11:24.529 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:11:24.529 | 99.99th=[19530] 00:11:24.529 bw ( KiB/s): min=13568, max=15655, per=24.68%, avg=14611.50, stdev=1475.73, samples=2 00:11:24.529 iops : min= 3392, max= 3913, avg=3652.50, stdev=368.40, samples=2 00:11:24.529 lat (usec) : 500=0.01% 00:11:24.529 lat (msec) : 4=0.26%, 10=0.61%, 20=96.92%, 50=2.20% 00:11:24.529 cpu : usr=3.79%, sys=11.58%, ctx=231, majf=0, minf=15 00:11:24.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:24.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.529 issued rwts: total=3584,3777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.529 job3: (groupid=0, jobs=1): err= 0: 
pid=82350: Tue Jul 23 04:07:17 2024 00:11:24.529 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:24.529 slat (usec): min=8, max=4834, avg=133.03, stdev=541.91 00:11:24.529 clat (usec): min=12675, max=23332, avg=17410.49, stdev=1555.24 00:11:24.529 lat (usec): min=12698, max=23368, avg=17543.51, stdev=1612.09 00:11:24.529 clat percentiles (usec): 00:11:24.529 | 1.00th=[13435], 5.00th=[14746], 10.00th=[15795], 20.00th=[16319], 00:11:24.529 | 30.00th=[16712], 40.00th=[16909], 50.00th=[17171], 60.00th=[17695], 00:11:24.529 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19268], 95.00th=[20579], 00:11:24.529 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22938], 99.95th=[22938], 00:11:24.529 | 99.99th=[23462] 00:11:24.529 write: IOPS=3896, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1003msec); 0 zone resets 00:11:24.529 slat (usec): min=12, max=5432, avg=125.37, stdev=603.44 00:11:24.529 clat (usec): min=2507, max=22952, avg=16395.17, stdev=2169.49 00:11:24.529 lat (usec): min=2530, max=23074, avg=16520.54, stdev=2237.94 00:11:24.529 clat percentiles (usec): 00:11:24.529 | 1.00th=[ 7046], 5.00th=[13304], 10.00th=[14615], 20.00th=[15139], 00:11:24.529 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16581], 60.00th=[16909], 00:11:24.529 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18220], 95.00th=[19268], 00:11:24.529 | 99.00th=[21365], 99.50th=[21627], 99.90th=[22938], 99.95th=[22938], 00:11:24.529 | 99.99th=[22938] 00:11:24.529 bw ( KiB/s): min=14200, max=16064, per=25.56%, avg=15132.00, stdev=1318.05, samples=2 00:11:24.529 iops : min= 3550, max= 4016, avg=3783.00, stdev=329.51, samples=2 00:11:24.529 lat (msec) : 4=0.32%, 10=0.56%, 20=94.05%, 50=5.07% 00:11:24.529 cpu : usr=3.39%, sys=12.38%, ctx=315, majf=0, minf=13 00:11:24.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:24.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.529 issued rwts: total=3584,3908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.529 00:11:24.529 Run status group 0 (all jobs): 00:11:24.529 READ: bw=55.2MiB/s (57.9MB/s), 11.7MiB/s-15.9MiB/s (12.2MB/s-16.7MB/s), io=55.8MiB (58.5MB), run=1003-1010msec 00:11:24.529 WRITE: bw=57.8MiB/s (60.6MB/s), 11.9MiB/s-16.3MiB/s (12.5MB/s-17.1MB/s), io=58.4MiB (61.2MB), run=1003-1010msec 00:11:24.529 00:11:24.529 Disk stats (read/write): 00:11:24.529 nvme0n1: ios=2602/2568, merge=0/0, ticks=54141/49105, in_queue=103246, util=88.68% 00:11:24.529 nvme0n2: ios=3546/3584, merge=0/0, ticks=26166/23573, in_queue=49739, util=89.57% 00:11:24.529 nvme0n3: ios=3078/3264, merge=0/0, ticks=12316/12334, in_queue=24650, util=89.18% 00:11:24.529 nvme0n4: ios=3072/3368, merge=0/0, ticks=16758/16184, in_queue=32942, util=89.73% 00:11:24.529 04:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:24.529 04:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=82367 00:11:24.529 04:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:24.529 04:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:24.529 [global] 00:11:24.529 thread=1 00:11:24.529 invalidate=1 00:11:24.529 rw=read 00:11:24.529 time_based=1 00:11:24.529 runtime=10 00:11:24.529 ioengine=libaio 00:11:24.529 direct=1 
00:11:24.529 bs=4096 00:11:24.529 iodepth=1 00:11:24.529 norandommap=1 00:11:24.529 numjobs=1 00:11:24.529 00:11:24.529 [job0] 00:11:24.529 filename=/dev/nvme0n1 00:11:24.529 [job1] 00:11:24.529 filename=/dev/nvme0n2 00:11:24.529 [job2] 00:11:24.529 filename=/dev/nvme0n3 00:11:24.529 [job3] 00:11:24.529 filename=/dev/nvme0n4 00:11:24.529 Could not set queue depth (nvme0n1) 00:11:24.529 Could not set queue depth (nvme0n2) 00:11:24.529 Could not set queue depth (nvme0n3) 00:11:24.529 Could not set queue depth (nvme0n4) 00:11:24.529 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.529 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.529 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.529 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.529 fio-3.35 00:11:24.529 Starting 4 threads 00:11:27.812 04:07:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:27.812 fio: pid=82410, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:27.812 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=53084160, buflen=4096 00:11:27.812 04:07:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:28.070 fio: pid=82409, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:28.070 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=43339776, buflen=4096 00:11:28.070 04:07:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.070 04:07:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:28.328 fio: pid=82407, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:28.328 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=46931968, buflen=4096 00:11:28.328 04:07:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.328 04:07:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:28.328 fio: pid=82408, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:28.328 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=55590912, buflen=4096 00:11:28.586 00:11:28.586 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=82407: Tue Jul 23 04:07:21 2024 00:11:28.587 read: IOPS=3296, BW=12.9MiB/s (13.5MB/s)(44.8MiB/3476msec) 00:11:28.587 slat (usec): min=10, max=15813, avg=21.14, stdev=236.20 00:11:28.587 clat (usec): min=139, max=2643, avg=280.60, stdev=67.82 00:11:28.587 lat (usec): min=153, max=16140, avg=301.73, stdev=246.53 00:11:28.587 clat percentiles (usec): 00:11:28.587 | 1.00th=[ 167], 5.00th=[ 200], 10.00th=[ 223], 20.00th=[ 239], 00:11:28.587 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:11:28.587 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 347], 95.00th=[ 371], 00:11:28.587 | 99.00th=[ 445], 99.50th=[ 498], 
99.90th=[ 988], 99.95th=[ 1205], 00:11:28.587 | 99.99th=[ 1876] 00:11:28.587 bw ( KiB/s): min=12680, max=13624, per=25.08%, avg=13082.67, stdev=406.28, samples=6 00:11:28.587 iops : min= 3170, max= 3406, avg=3270.67, stdev=101.57, samples=6 00:11:28.587 lat (usec) : 250=29.72%, 500=69.78%, 750=0.35%, 1000=0.05% 00:11:28.587 lat (msec) : 2=0.08%, 4=0.01% 00:11:28.587 cpu : usr=1.32%, sys=4.81%, ctx=11472, majf=0, minf=1 00:11:28.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.587 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.587 issued rwts: total=11459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.587 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=82408: Tue Jul 23 04:07:21 2024 00:11:28.587 read: IOPS=3643, BW=14.2MiB/s (14.9MB/s)(53.0MiB/3725msec) 00:11:28.587 slat (usec): min=10, max=9832, avg=17.62, stdev=165.30 00:11:28.587 clat (usec): min=152, max=2933, avg=255.27, stdev=59.80 00:11:28.587 lat (usec): min=163, max=10016, avg=272.89, stdev=175.53 00:11:28.587 clat percentiles (usec): 00:11:28.587 | 1.00th=[ 178], 5.00th=[ 202], 10.00th=[ 215], 20.00th=[ 227], 00:11:28.587 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 260], 00:11:28.587 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 314], 00:11:28.587 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 783], 99.95th=[ 1680], 00:11:28.587 | 99.99th=[ 2442] 00:11:28.587 bw ( KiB/s): min=14056, max=14648, per=27.75%, avg=14474.29, stdev=201.26, samples=7 00:11:28.587 iops : min= 3514, max= 3662, avg=3618.57, stdev=50.32, samples=7 00:11:28.587 lat (usec) : 250=48.97%, 500=50.79%, 750=0.12%, 1000=0.04% 00:11:28.587 lat (msec) : 2=0.04%, 4=0.03% 00:11:28.587 cpu : usr=0.99%, sys=4.78%, ctx=13581, majf=0, minf=1 00:11:28.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.587 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.587 issued rwts: total=13573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.587 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=82409: Tue Jul 23 04:07:21 2024 00:11:28.587 read: IOPS=3262, BW=12.7MiB/s (13.4MB/s)(41.3MiB/3244msec) 00:11:28.587 slat (usec): min=11, max=7293, avg=17.56, stdev=95.93 00:11:28.587 clat (usec): min=156, max=2328, avg=287.44, stdev=64.53 00:11:28.587 lat (usec): min=171, max=7634, avg=305.00, stdev=117.24 00:11:28.587 clat percentiles (usec): 00:11:28.587 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 247], 00:11:28.587 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:11:28.587 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 351], 95.00th=[ 371], 00:11:28.587 | 99.00th=[ 453], 99.50th=[ 498], 99.90th=[ 930], 99.95th=[ 1516], 00:11:28.587 | 99.99th=[ 2089] 00:11:28.587 bw ( KiB/s): min=12576, max=13624, per=25.20%, avg=13145.33, stdev=465.91, samples=6 00:11:28.587 iops : min= 3144, max= 3406, avg=3286.33, stdev=116.48, samples=6 00:11:28.587 lat (usec) : 250=23.44%, 500=76.09%, 750=0.34%, 1000=0.04% 00:11:28.587 lat (msec) : 2=0.07%, 4=0.02% 00:11:28.587 cpu : usr=1.33%, sys=4.41%, ctx=10587, majf=0, 
minf=1 00:11:28.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.587 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.587 issued rwts: total=10582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.587 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=82410: Tue Jul 23 04:07:21 2024 00:11:28.587 read: IOPS=4405, BW=17.2MiB/s (18.0MB/s)(50.6MiB/2942msec) 00:11:28.587 slat (nsec): min=11353, max=82124, avg=14791.27, stdev=5057.12 00:11:28.587 clat (usec): min=131, max=1234, avg=210.57, stdev=35.17 00:11:28.587 lat (usec): min=154, max=1251, avg=225.36, stdev=35.65 00:11:28.587 clat percentiles (usec): 00:11:28.587 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 184], 00:11:28.587 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 215], 00:11:28.587 | 70.00th=[ 223], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 273], 00:11:28.587 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 367], 99.95th=[ 383], 00:11:28.587 | 99.99th=[ 1012] 00:11:28.587 bw ( KiB/s): min=17016, max=18240, per=33.75%, avg=17604.80, stdev=564.46, samples=5 00:11:28.587 iops : min= 4254, max= 4560, avg=4401.20, stdev=141.11, samples=5 00:11:28.587 lat (usec) : 250=87.35%, 500=12.62% 00:11:28.587 lat (msec) : 2=0.02% 00:11:28.587 cpu : usr=1.29%, sys=5.81%, ctx=12961, majf=0, minf=1 00:11:28.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.587 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.587 issued rwts: total=12961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.587 00:11:28.587 Run status group 0 (all jobs): 00:11:28.587 READ: bw=50.9MiB/s (53.4MB/s), 12.7MiB/s-17.2MiB/s (13.4MB/s-18.0MB/s), io=190MiB (199MB), run=2942-3725msec 00:11:28.587 00:11:28.587 Disk stats (read/write): 00:11:28.587 nvme0n1: ios=10976/0, merge=0/0, ticks=3194/0, in_queue=3194, util=95.08% 00:11:28.587 nvme0n2: ios=13076/0, merge=0/0, ticks=3432/0, in_queue=3432, util=95.69% 00:11:28.587 nvme0n3: ios=10195/0, merge=0/0, ticks=2984/0, in_queue=2984, util=96.46% 00:11:28.587 nvme0n4: ios=12656/0, merge=0/0, ticks=2730/0, in_queue=2730, util=96.79% 00:11:28.587 04:07:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.587 04:07:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:28.845 04:07:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.845 04:07:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:29.104 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.104 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:29.362 04:07:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.362 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:29.621 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.621 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:29.621 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:29.621 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 82367 00:11:29.621 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:29.621 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.879 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.879 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:29.879 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:29.879 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.879 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:29.879 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.879 nvmf hotplug test: fio failed as expected 00:11:29.879 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:29.879 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:29.879 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:29.879 04:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.879 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.879 rmmod nvme_tcp 00:11:30.138 rmmod nvme_fabrics 00:11:30.138 rmmod nvme_keyring 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 81981 ']' 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 81981 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 81981 ']' 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 81981 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81981 00:11:30.138 killing process with pid 81981 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81981' 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 81981 00:11:30.138 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 81981 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:30.396 ************************************ 00:11:30.396 END TEST nvmf_fio_target 00:11:30.396 ************************************ 00:11:30.396 00:11:30.396 real 0m19.328s 00:11:30.396 user 1m12.973s 00:11:30.396 sys 0m9.420s 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:30.396 ************************************ 00:11:30.396 START TEST nvmf_bdevio 00:11:30.396 ************************************ 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:30.396 * Looking for test storage... 00:11:30.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.396 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.397 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:30.655 Cannot find device "nvmf_tgt_br" 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:11:30.655 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:30.655 Cannot find device "nvmf_tgt_br2" 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:30.656 Cannot find device "nvmf_tgt_br" 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:30.656 Cannot find device "nvmf_tgt_br2" 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:30.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:30.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 
dev nvmf_tgt_if2 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:30.656 04:07:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:30.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:30.914 00:11:30.914 --- 10.0.0.2 ping statistics --- 00:11:30.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.914 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:30.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:30.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:11:30.914 00:11:30.914 --- 10.0.0.3 ping statistics --- 00:11:30.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.914 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:30.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:30.914 00:11:30.914 --- 10.0.0.1 ping statistics --- 00:11:30.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.914 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=82674 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 82674 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 82674 ']' 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:30.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:30.914 04:07:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.914 [2024-07-23 04:07:24.141604] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:11:30.914 [2024-07-23 04:07:24.141706] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.173 [2024-07-23 04:07:24.265622] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:11:31.173 [2024-07-23 04:07:24.281047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.173 [2024-07-23 04:07:24.341933] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.173 [2024-07-23 04:07:24.342258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.173 [2024-07-23 04:07:24.342430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.173 [2024-07-23 04:07:24.342560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.173 [2024-07-23 04:07:24.342607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.173 [2024-07-23 04:07:24.342860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:31.173 [2024-07-23 04:07:24.343076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:31.173 [2024-07-23 04:07:24.343387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:31.173 [2024-07-23 04:07:24.343396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.173 [2024-07-23 04:07:24.398023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:31.740 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.740 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:31.740 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.740 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:31.740 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.998 [2024-07-23 04:07:25.113184] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.998 Malloc0 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
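With the target app up inside the namespace, bdevio.sh provisions it over JSON-RPC. rpc_cmd is the harness wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock, so the calls above plus the two that follow just below (namespace and listener) amount to the sequence sketched here, with flags copied from the log:

# The RPC provisioning done by bdevio.sh, expressed with rpc.py directly (sketch).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py               # uses /var/tmp/spdk.sock by default

$RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB I/O unit size
$RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 # expose Malloc0 as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420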
00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.998 [2024-07-23 04:07:25.194067] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:31.998 { 00:11:31.998 "params": { 00:11:31.998 "name": "Nvme$subsystem", 00:11:31.998 "trtype": "$TEST_TRANSPORT", 00:11:31.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:31.998 "adrfam": "ipv4", 00:11:31.998 "trsvcid": "$NVMF_PORT", 00:11:31.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:31.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:31.998 "hdgst": ${hdgst:-false}, 00:11:31.998 "ddgst": ${ddgst:-false} 00:11:31.998 }, 00:11:31.998 "method": "bdev_nvme_attach_controller" 00:11:31.998 } 00:11:31.998 EOF 00:11:31.998 )") 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:31.998 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:31.998 "params": { 00:11:31.998 "name": "Nvme1", 00:11:31.998 "trtype": "tcp", 00:11:31.998 "traddr": "10.0.0.2", 00:11:31.998 "adrfam": "ipv4", 00:11:31.998 "trsvcid": "4420", 00:11:31.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:31.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:31.998 "hdgst": false, 00:11:31.998 "ddgst": false 00:11:31.998 }, 00:11:31.998 "method": "bdev_nvme_attach_controller" 00:11:31.998 }' 00:11:31.998 [2024-07-23 04:07:25.245104] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
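gen_nvmf_target_json (the heredoc template above) emits one bdev_nvme_attach_controller entry per subsystem and hands the result to bdevio over /dev/fd/62, i.e. a process substitution. Putting the printed params into the usual SPDK JSON-config envelope, the invocation is roughly equivalent to the sketch below; the outer "subsystems"/"bdev" wrapper is reconstructed, the inner params are copied from the log:

# Roughly what 'bdevio --json /dev/fd/62' receives: a JSON config whose only entry
# attaches the NVMe-oF namespace as bdev "Nvme1n1".
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)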
00:11:31.998 [2024-07-23 04:07:25.245163] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82710 ] 00:11:32.256 [2024-07-23 04:07:25.362181] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:32.256 [2024-07-23 04:07:25.377973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.256 [2024-07-23 04:07:25.440349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.256 [2024-07-23 04:07:25.440487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.256 [2024-07-23 04:07:25.440497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.256 [2024-07-23 04:07:25.503734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:32.515 I/O targets: 00:11:32.515 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:32.515 00:11:32.515 00:11:32.515 CUnit - A unit testing framework for C - Version 2.1-3 00:11:32.515 http://cunit.sourceforge.net/ 00:11:32.515 00:11:32.515 00:11:32.515 Suite: bdevio tests on: Nvme1n1 00:11:32.515 Test: blockdev write read block ...passed 00:11:32.515 Test: blockdev write zeroes read block ...passed 00:11:32.515 Test: blockdev write zeroes read no split ...passed 00:11:32.515 Test: blockdev write zeroes read split ...passed 00:11:32.515 Test: blockdev write zeroes read split partial ...passed 00:11:32.515 Test: blockdev reset ...[2024-07-23 04:07:25.654597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:32.515 [2024-07-23 04:07:25.654700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3c130 (9): Bad file descriptor 00:11:32.515 [2024-07-23 04:07:25.666495] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:32.515 passed 00:11:32.515 Test: blockdev write read 8 blocks ...passed 00:11:32.515 Test: blockdev write read size > 128k ...passed 00:11:32.515 Test: blockdev write read invalid size ...passed 00:11:32.515 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:32.515 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:32.515 Test: blockdev write read max offset ...passed 00:11:32.515 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:32.515 Test: blockdev writev readv 8 blocks ...passed 00:11:32.515 Test: blockdev writev readv 30 x 1block ...passed 00:11:32.515 Test: blockdev writev readv block ...passed 00:11:32.515 Test: blockdev writev readv size > 128k ...passed 00:11:32.516 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:32.516 Test: blockdev comparev and writev ...[2024-07-23 04:07:25.679575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.516 [2024-07-23 04:07:25.679623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:32.516 [2024-07-23 04:07:25.679651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.516 [2024-07-23 04:07:25.679665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:32.516 [2024-07-23 04:07:25.680039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.516 [2024-07-23 04:07:25.680068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:32.516 [2024-07-23 04:07:25.680091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.516 [2024-07-23 04:07:25.680105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:32.516 [2024-07-23 04:07:25.680416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.516 [2024-07-23 04:07:25.680443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:32.516 [2024-07-23 04:07:25.680466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.516 [2024-07-23 04:07:25.680480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:32.516 [2024-07-23 04:07:25.680824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.516 [2024-07-23 04:07:25.680850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:32.516 [2024-07-23 04:07:25.680873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:32.516 [2024-07-23 04:07:25.680886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:32.516 passed 00:11:32.516 Test: blockdev nvme passthru rw ...passed 00:11:32.516 Test: blockdev nvme passthru vendor specific ...passed 00:11:32.516 Test: blockdev nvme admin passthru ...[2024-07-23 04:07:25.682513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:32.516 [2024-07-23 04:07:25.682555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:32.516 [2024-07-23 04:07:25.682720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:32.516 [2024-07-23 04:07:25.682746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:32.516 [2024-07-23 04:07:25.682884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:32.516 [2024-07-23 04:07:25.682925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:32.516 [2024-07-23 04:07:25.683097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:32.516 [2024-07-23 04:07:25.683124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:32.516 passed 00:11:32.516 Test: blockdev copy ...passed 00:11:32.516 00:11:32.516 Run Summary: Type Total Ran Passed Failed Inactive 00:11:32.516 suites 1 1 n/a 0 0 00:11:32.516 tests 23 23 23 0 0 00:11:32.516 asserts 152 152 152 0 n/a 00:11:32.516 00:11:32.516 Elapsed time = 0.149 seconds 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.775 04:07:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:32.775 rmmod nvme_tcp 00:11:32.775 rmmod nvme_fabrics 00:11:32.775 rmmod nvme_keyring 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 82674 ']' 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 82674 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 82674 ']' 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 82674 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82674 00:11:32.775 killing process with pid 82674 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82674' 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 82674 00:11:32.775 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 82674 00:11:33.034 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:33.034 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:33.034 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:33.034 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.034 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:33.034 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.034 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.034 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:33.293 ************************************ 00:11:33.293 END TEST nvmf_bdevio 00:11:33.293 ************************************ 00:11:33.293 00:11:33.293 real 0m2.768s 00:11:33.293 user 0m9.171s 00:11:33.293 sys 0m0.777s 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:33.293 ************************************ 00:11:33.293 END TEST nvmf_target_core 00:11:33.293 ************************************ 00:11:33.293 00:11:33.293 real 2m31.605s 00:11:33.293 user 6m43.140s 00:11:33.293 sys 0m53.249s 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 
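The teardown at the end of the bdevio test is driven by the EXIT trap: the subsystem is deleted over RPC, nvmfcleanup unloads the kernel nvme modules (the rmmod lines), killprocess stops the target by pid, and nvmf_tcp_fini removes the namespace and flushes the initiator address. Condensed into plain commands (pid and names taken from the log; remove_spdk_ns is sketched here as a simple ip netns delete):

# Condensed teardown mirroring nvmftestfini for this run (sketch).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

sync
modprobe -v -r nvme-tcp            # drags out nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics

kill 82674                         # the nvmf_tgt started earlier (nvmfpid in the log)
wait 82674 2>/dev/null || true     # reap it; '|| true' in case it is not our child

ip netns delete nvmf_tgt_ns_spdk   # takes the in-namespace veth ends with it
ip -4 addr flush nvmf_init_if      # drop 10.0.0.1/24 from the initiator interface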
00:11:33.293 04:07:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:33.293 04:07:26 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:33.293 04:07:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:33.293 04:07:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.293 04:07:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:33.293 ************************************ 00:11:33.293 START TEST nvmf_target_extra 00:11:33.293 ************************************ 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:33.293 * Looking for test storage... 00:11:33.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:33.293 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:33.294 04:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:33.294 04:07:26 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:33.294 04:07:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.294 04:07:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:33.294 ************************************ 00:11:33.294 START TEST nvmf_auth_target 00:11:33.294 ************************************ 00:11:33.294 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:33.553 * Looking for test storage... 00:11:33.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.553 04:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.553 04:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:33.553 Cannot find device "nvmf_tgt_br" 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:33.553 Cannot find device "nvmf_tgt_br2" 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:33.553 Cannot find device "nvmf_tgt_br" 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:33.553 Cannot find device "nvmf_tgt_br2" 00:11:33.553 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:33.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:33.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:33.554 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:33.812 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:33.812 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:33.812 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:33.812 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:33.812 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:33.812 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:33.812 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:33.812 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:33.812 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:33.812 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:33.813 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:33.813 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:33.813 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:33.813 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:33.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:11:33.813 00:11:33.813 --- 10.0.0.2 ping statistics --- 00:11:33.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.813 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:33.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:33.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:11:33.813 00:11:33.813 --- 10.0.0.3 ping statistics --- 00:11:33.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.813 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:33.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:11:33.813 00:11:33.813 --- 10.0.0.1 ping statistics --- 00:11:33.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.813 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82935 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82935 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82935 ']' 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
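auth.sh repeats the same network bring-up and then starts the target with -L nvmf_auth so the DH-HMAC-CHAP state machine logs its transitions; waitforlisten blocks until the RPC socket answers before provisioning continues. A bare-bones version of that start/wait step (the real helper adds timeouts and extra checks) looks like:

# Start the target in the namespace with auth debug logging, then wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!

until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done

A second SPDK app (spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth, pid 82954 below) is then launched to play the host side of the authentication handshake.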
00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.813 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=82954 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bab957d4f1cae25d941fe26491593d6936f476eefcc0835b 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ibs 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bab957d4f1cae25d941fe26491593d6936f476eefcc0835b 0 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bab957d4f1cae25d941fe26491593d6936f476eefcc0835b 0 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bab957d4f1cae25d941fe26491593d6936f476eefcc0835b 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:34.381 04:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ibs 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ibs 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ibs 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f670220de48d1f252185373b93fe7e4da2328de30c57bd70c7c56a2c52ac43e6 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.pmB 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f670220de48d1f252185373b93fe7e4da2328de30c57bd70c7c56a2c52ac43e6 3 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f670220de48d1f252185373b93fe7e4da2328de30c57bd70c7c56a2c52ac43e6 3 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f670220de48d1f252185373b93fe7e4da2328de30c57bd70c7c56a2c52ac43e6 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.pmB 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.pmB 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.pmB 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:34.381 04:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e5ced66eea0bd369dddd4f51788f0511 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zva 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e5ced66eea0bd369dddd4f51788f0511 1 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e5ced66eea0bd369dddd4f51788f0511 1 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e5ced66eea0bd369dddd4f51788f0511 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zva 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zva 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.zva 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d0e0e711328d21a9ac55dfe73e0c0440bb84dc14d1d380df 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.EV4 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d0e0e711328d21a9ac55dfe73e0c0440bb84dc14d1d380df 2 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d0e0e711328d21a9ac55dfe73e0c0440bb84dc14d1d380df 2 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d0e0e711328d21a9ac55dfe73e0c0440bb84dc14d1d380df 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:34.381 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.EV4 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.EV4 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.EV4 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=22ade3cb04a0eb007fd4f8a5cb172624ac999f48d63e1ae9 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GY8 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 22ade3cb04a0eb007fd4f8a5cb172624ac999f48d63e1ae9 2 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 22ade3cb04a0eb007fd4f8a5cb172624ac999f48d63e1ae9 2 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=22ade3cb04a0eb007fd4f8a5cb172624ac999f48d63e1ae9 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GY8 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GY8 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.GY8 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:34.641 04:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dc818c547097f31af428ff1fd28c420d 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Lc5 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dc818c547097f31af428ff1fd28c420d 1 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dc818c547097f31af428ff1fd28c420d 1 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dc818c547097f31af428ff1fd28c420d 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Lc5 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Lc5 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Lc5 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0a85ce15776aa7aab7295b508c511ab3cc1cba57602aaa145b8571da589dfe92 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mlE 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
0a85ce15776aa7aab7295b508c511ab3cc1cba57602aaa145b8571da589dfe92 3 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0a85ce15776aa7aab7295b508c511ab3cc1cba57602aaa145b8571da589dfe92 3 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0a85ce15776aa7aab7295b508c511ab3cc1cba57602aaa145b8571da589dfe92 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mlE 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mlE 00:11:34.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.mlE 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 82935 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82935 ']' 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.641 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:34.899 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.899 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:34.899 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 82954 /var/tmp/host.sock 00:11:34.899 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82954 ']' 00:11:34.899 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:34.899 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.899 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
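The gen_dhchap_key calls traced above boil down to a short shell sequence. The following is a minimal sketch reconstructed from the xtrace output (function and file names are the ones from nvmf/common.sh shown in the log; the inline `python -` step that wraps the raw hex into the DHHC-1 secret format is not visible in the trace, so it is only described in a comment rather than reproduced):

  # gen_dhchap_key <digest> <len>, e.g. "sha512 64" as in target/auth.sh@67
  digest=sha512; len=64
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes -> len hex chars
  file=$(mktemp -t "spdk.key-${digest}.XXX")
  # format_dhchap_key feeds $key to an inline python snippet (elided in the log) that
  # writes a "DHHC-1:<digest id>:<payload>:" secret into $file; digest ids follow the
  # digests array above (null=0, sha256=1, sha384=2, sha512=3).
  chmod 0600 "$file"
  echo "$file"                                     # stored as keys[i] / ckeys[i]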
00:11:34.899 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.900 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ibs 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ibs 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ibs 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.pmB ]] 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pmB 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pmB 00:11:35.467 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pmB 00:11:35.726 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:35.726 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.zva 00:11:35.726 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.726 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.726 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.726 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.zva 00:11:35.726 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.zva 00:11:35.985 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.EV4 ]] 00:11:35.985 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EV4 00:11:35.985 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.985 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.985 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.985 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EV4 00:11:35.985 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EV4 00:11:36.243 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:36.243 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GY8 00:11:36.243 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.243 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.243 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.243 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.GY8 00:11:36.243 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.GY8 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Lc5 ]] 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lc5 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lc5 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lc5 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.mlE 00:11:36.502 04:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.mlE 00:11:36.502 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.mlE 00:11:36.761 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:11:36.761 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:36.761 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:36.761 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.761 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:36.761 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:37.019 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:11:37.019 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.019 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:37.019 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:37.020 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:37.020 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.020 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.020 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.020 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.020 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.020 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.020 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:11:37.278 00:11:37.278 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.278 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.278 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.536 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.536 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.536 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.536 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.536 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.536 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.536 { 00:11:37.536 "cntlid": 1, 00:11:37.536 "qid": 0, 00:11:37.536 "state": "enabled", 00:11:37.536 "thread": "nvmf_tgt_poll_group_000", 00:11:37.536 "listen_address": { 00:11:37.536 "trtype": "TCP", 00:11:37.536 "adrfam": "IPv4", 00:11:37.536 "traddr": "10.0.0.2", 00:11:37.536 "trsvcid": "4420" 00:11:37.536 }, 00:11:37.536 "peer_address": { 00:11:37.536 "trtype": "TCP", 00:11:37.536 "adrfam": "IPv4", 00:11:37.536 "traddr": "10.0.0.1", 00:11:37.536 "trsvcid": "57160" 00:11:37.536 }, 00:11:37.536 "auth": { 00:11:37.536 "state": "completed", 00:11:37.536 "digest": "sha256", 00:11:37.536 "dhgroup": "null" 00:11:37.536 } 00:11:37.536 } 00:11:37.536 ]' 00:11:37.536 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.794 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.794 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.794 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:37.794 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.794 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.794 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.794 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.052 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:11:42.234 04:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.234 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:11:42.234 04:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:42.234 04:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.234 04:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.234 04:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.234 04:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.235 04:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:42.235 04:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.235 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.235 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
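Each connect_authenticate round in this trace pairs one target-side RPC with two host-side RPCs over the two sockets started earlier. A condensed sketch of the key1/sha256/null round shown above, using only commands that appear in the log (rpc.py paths shortened to scripts/rpc.py; NQNs and the host UUID copied verbatim from the trace):

  # Host side (initiator app on /var/tmp/host.sock): select the digest/dhgroup under test.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null

  # Target side (default /var/tmp/spdk.sock): allow the host NQN and bind its DHCHAP keys.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side again: attach, authenticating with the matching key pair.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1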
00:11:42.494 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.494 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.494 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.494 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.494 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.494 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.494 { 00:11:42.494 "cntlid": 3, 00:11:42.494 "qid": 0, 00:11:42.494 "state": "enabled", 00:11:42.494 "thread": "nvmf_tgt_poll_group_000", 00:11:42.494 "listen_address": { 00:11:42.494 "trtype": "TCP", 00:11:42.494 "adrfam": "IPv4", 00:11:42.494 "traddr": "10.0.0.2", 00:11:42.494 "trsvcid": "4420" 00:11:42.494 }, 00:11:42.494 "peer_address": { 00:11:42.494 "trtype": "TCP", 00:11:42.494 "adrfam": "IPv4", 00:11:42.494 "traddr": "10.0.0.1", 00:11:42.494 "trsvcid": "54082" 00:11:42.494 }, 00:11:42.494 "auth": { 00:11:42.494 "state": "completed", 00:11:42.494 "digest": "sha256", 00:11:42.494 "dhgroup": "null" 00:11:42.494 } 00:11:42.494 } 00:11:42.494 ]' 00:11:42.494 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.494 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.494 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.752 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:42.752 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.752 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.752 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.752 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.011 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:11:43.578 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.578 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:43.578 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.578 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
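The pass criterion for each round is the qpairs JSON printed just above: the auth block must report the digest and DH group that were configured, plus a completed state. A minimal sketch of that verification step, assuming the same RPCs and jq filters shown in the trace:

  # Fetch the subsystem's active queue pairs from the target...
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # ...and assert the DH-HMAC-CHAP negotiation matched this round (sha256 / null here).
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # Detach before the next digest/dhgroup/key combination is tried.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0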
00:11:43.578 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.578 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:43.578 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:43.578 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.837 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.095 00:11:44.095 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.095 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.095 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.662 { 00:11:44.662 "cntlid": 5, 00:11:44.662 "qid": 0, 00:11:44.662 "state": "enabled", 00:11:44.662 "thread": "nvmf_tgt_poll_group_000", 00:11:44.662 "listen_address": { 00:11:44.662 "trtype": "TCP", 00:11:44.662 "adrfam": "IPv4", 00:11:44.662 "traddr": "10.0.0.2", 00:11:44.662 "trsvcid": "4420" 00:11:44.662 }, 00:11:44.662 "peer_address": { 00:11:44.662 "trtype": "TCP", 00:11:44.662 "adrfam": "IPv4", 00:11:44.662 "traddr": "10.0.0.1", 00:11:44.662 "trsvcid": "54108" 00:11:44.662 }, 00:11:44.662 "auth": { 00:11:44.662 "state": "completed", 00:11:44.662 "digest": "sha256", 00:11:44.662 "dhgroup": "null" 00:11:44.662 } 00:11:44.662 } 00:11:44.662 ]' 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.662 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.663 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.921 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:11:45.488 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.488 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:45.488 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.488 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.488 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.488 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.488 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:45.488 04:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:45.747 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:46.314 00:11:46.314 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.314 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.314 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:46.573 { 00:11:46.573 "cntlid": 7, 00:11:46.573 "qid": 0, 00:11:46.573 "state": "enabled", 00:11:46.573 "thread": "nvmf_tgt_poll_group_000", 00:11:46.573 "listen_address": { 00:11:46.573 "trtype": "TCP", 00:11:46.573 "adrfam": "IPv4", 00:11:46.573 "traddr": 
"10.0.0.2", 00:11:46.573 "trsvcid": "4420" 00:11:46.573 }, 00:11:46.573 "peer_address": { 00:11:46.573 "trtype": "TCP", 00:11:46.573 "adrfam": "IPv4", 00:11:46.573 "traddr": "10.0.0.1", 00:11:46.573 "trsvcid": "54134" 00:11:46.573 }, 00:11:46.573 "auth": { 00:11:46.573 "state": "completed", 00:11:46.573 "digest": "sha256", 00:11:46.573 "dhgroup": "null" 00:11:46.573 } 00:11:46.573 } 00:11:46.573 ]' 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.573 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.832 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:11:47.399 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.399 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:47.399 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.399 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.399 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.399 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:47.399 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.399 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:47.399 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:47.657 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:47.657 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:47.657 04:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:47.657 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:47.657 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:47.657 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.657 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.657 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.657 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.657 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.657 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.658 04:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.915 00:11:47.915 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.915 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.915 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.173 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.173 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.173 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.173 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.173 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.173 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.173 { 00:11:48.173 "cntlid": 9, 00:11:48.173 "qid": 0, 00:11:48.173 "state": "enabled", 00:11:48.173 "thread": "nvmf_tgt_poll_group_000", 00:11:48.173 "listen_address": { 00:11:48.173 "trtype": "TCP", 00:11:48.173 "adrfam": "IPv4", 00:11:48.173 "traddr": "10.0.0.2", 00:11:48.173 "trsvcid": "4420" 00:11:48.173 }, 00:11:48.173 "peer_address": { 00:11:48.173 "trtype": "TCP", 00:11:48.173 "adrfam": "IPv4", 00:11:48.173 "traddr": "10.0.0.1", 00:11:48.174 "trsvcid": "54170" 00:11:48.174 }, 00:11:48.174 "auth": { 00:11:48.174 "state": "completed", 00:11:48.174 "digest": "sha256", 00:11:48.174 "dhgroup": "ffdhe2048" 00:11:48.174 } 00:11:48.174 } 
00:11:48.174 ]' 00:11:48.174 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.174 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.174 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.443 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:48.443 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.444 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.444 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.444 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.736 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:11:49.302 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.302 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:49.302 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.302 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.303 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.303 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.303 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:49.303 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.561 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.820 00:11:49.820 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.820 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.820 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.086 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.086 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.086 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.086 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.086 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.086 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.086 { 00:11:50.086 "cntlid": 11, 00:11:50.086 "qid": 0, 00:11:50.086 "state": "enabled", 00:11:50.086 "thread": "nvmf_tgt_poll_group_000", 00:11:50.086 "listen_address": { 00:11:50.086 "trtype": "TCP", 00:11:50.086 "adrfam": "IPv4", 00:11:50.086 "traddr": "10.0.0.2", 00:11:50.086 "trsvcid": "4420" 00:11:50.086 }, 00:11:50.086 "peer_address": { 00:11:50.086 "trtype": "TCP", 00:11:50.086 "adrfam": "IPv4", 00:11:50.086 "traddr": "10.0.0.1", 00:11:50.086 "trsvcid": "35964" 00:11:50.086 }, 00:11:50.086 "auth": { 00:11:50.086 "state": "completed", 00:11:50.086 "digest": "sha256", 00:11:50.086 "dhgroup": "ffdhe2048" 00:11:50.086 } 00:11:50.086 } 00:11:50.086 ]' 00:11:50.086 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.086 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.086 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.347 04:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.347 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.347 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.347 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.347 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.606 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:11:51.173 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.173 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:51.173 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.173 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.173 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.173 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.173 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:51.173 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
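For readability, a minimal sketch (not part of the captured log) of the per-key round that connect_authenticate in target/auth.sh keeps repeating above. The key names key1/ckey1 stand for DH-HMAC-CHAP keys registered earlier in the test, and the NQNs/host UUID are the ones used throughout this run; the commands themselves mirror the RPC calls visible in the trace.

    # Target side: allow this host on the subsystem with a DH-HMAC-CHAP key pair
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side (SPDK bdev_nvme initiator, RPC socket /var/tmp/host.sock):
    # pin the negotiation to one digest/dhgroup, then attach with the matching keys
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the authenticated qpair, then tear the controller down again
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect sha256, ffdhe2048, completed
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0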
00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.432 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.000 00:11:52.000 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.000 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.000 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.259 { 00:11:52.259 "cntlid": 13, 00:11:52.259 "qid": 0, 00:11:52.259 "state": "enabled", 00:11:52.259 "thread": "nvmf_tgt_poll_group_000", 00:11:52.259 "listen_address": { 00:11:52.259 "trtype": "TCP", 00:11:52.259 "adrfam": "IPv4", 00:11:52.259 "traddr": "10.0.0.2", 00:11:52.259 "trsvcid": "4420" 00:11:52.259 }, 00:11:52.259 "peer_address": { 00:11:52.259 "trtype": "TCP", 00:11:52.259 "adrfam": "IPv4", 00:11:52.259 "traddr": "10.0.0.1", 00:11:52.259 "trsvcid": "35990" 00:11:52.259 }, 00:11:52.259 "auth": { 00:11:52.259 "state": "completed", 00:11:52.259 "digest": "sha256", 00:11:52.259 "dhgroup": "ffdhe2048" 00:11:52.259 } 00:11:52.259 } 00:11:52.259 ]' 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.259 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.518 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:11:53.085 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.085 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:53.085 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.085 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.085 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.085 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.085 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:53.085 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:53.344 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:53.912 00:11:53.912 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.912 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.912 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:54.171 { 00:11:54.171 "cntlid": 15, 00:11:54.171 "qid": 0, 00:11:54.171 "state": "enabled", 00:11:54.171 "thread": "nvmf_tgt_poll_group_000", 00:11:54.171 "listen_address": { 00:11:54.171 "trtype": "TCP", 00:11:54.171 "adrfam": "IPv4", 00:11:54.171 "traddr": "10.0.0.2", 00:11:54.171 "trsvcid": "4420" 00:11:54.171 }, 00:11:54.171 "peer_address": { 00:11:54.171 "trtype": "TCP", 00:11:54.171 "adrfam": "IPv4", 00:11:54.171 "traddr": "10.0.0.1", 00:11:54.171 "trsvcid": "36010" 00:11:54.171 }, 00:11:54.171 "auth": { 00:11:54.171 "state": "completed", 00:11:54.171 "digest": "sha256", 00:11:54.171 "dhgroup": "ffdhe2048" 00:11:54.171 } 00:11:54.171 } 00:11:54.171 ]' 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.171 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.429 04:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:11:54.997 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.997 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:54.997 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.997 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.997 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.997 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.997 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.997 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:54.997 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:55.256 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:55.256 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.256 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:55.256 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:55.256 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:55.256 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.256 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.256 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.257 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.257 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.257 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.257 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.515 00:11:55.515 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.515 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.515 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.774 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.774 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.774 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.774 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.774 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.774 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.774 { 00:11:55.774 "cntlid": 17, 00:11:55.774 "qid": 0, 00:11:55.774 "state": "enabled", 00:11:55.774 "thread": "nvmf_tgt_poll_group_000", 00:11:55.774 "listen_address": { 00:11:55.774 "trtype": "TCP", 00:11:55.774 "adrfam": "IPv4", 00:11:55.774 "traddr": "10.0.0.2", 00:11:55.774 "trsvcid": "4420" 00:11:55.774 }, 00:11:55.774 "peer_address": { 00:11:55.774 "trtype": "TCP", 00:11:55.774 "adrfam": "IPv4", 00:11:55.774 "traddr": "10.0.0.1", 00:11:55.774 "trsvcid": "36036" 00:11:55.774 }, 00:11:55.774 "auth": { 00:11:55.774 "state": "completed", 00:11:55.774 "digest": "sha256", 00:11:55.774 "dhgroup": "ffdhe3072" 00:11:55.774 } 00:11:55.774 } 00:11:55.774 ]' 00:11:55.774 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.774 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:56.033 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.033 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:56.033 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.033 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.033 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.033 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.300 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:11:56.867 04:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.867 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:56.867 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.867 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.867 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.867 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.867 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:56.867 04:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.867 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.435 00:11:57.435 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.435 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
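The same subsystem is also exercised with the kernel initiator via nvme-cli (the target/auth.sh@52 and @55 lines above), passing the DH-HMAC-CHAP secrets directly on the command line. A sketch of that step follows; <host-secret> and <ctrl-secret> are placeholders for the DHHC-1:xx:...: strings shown in the log.

    # Kernel-initiator check: connect with host and controller secrets, then disconnect
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
        --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
        --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # The host entry is then removed on the target before the next key/dhgroup is tried
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274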
00:11:57.435 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.435 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.435 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.435 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.435 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.693 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.694 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.694 { 00:11:57.694 "cntlid": 19, 00:11:57.694 "qid": 0, 00:11:57.694 "state": "enabled", 00:11:57.694 "thread": "nvmf_tgt_poll_group_000", 00:11:57.694 "listen_address": { 00:11:57.694 "trtype": "TCP", 00:11:57.694 "adrfam": "IPv4", 00:11:57.694 "traddr": "10.0.0.2", 00:11:57.694 "trsvcid": "4420" 00:11:57.694 }, 00:11:57.694 "peer_address": { 00:11:57.694 "trtype": "TCP", 00:11:57.694 "adrfam": "IPv4", 00:11:57.694 "traddr": "10.0.0.1", 00:11:57.694 "trsvcid": "36068" 00:11:57.694 }, 00:11:57.694 "auth": { 00:11:57.694 "state": "completed", 00:11:57.694 "digest": "sha256", 00:11:57.694 "dhgroup": "ffdhe3072" 00:11:57.694 } 00:11:57.694 } 00:11:57.694 ]' 00:11:57.694 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.694 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.694 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.694 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:57.694 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.694 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.694 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.694 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.953 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:11:58.520 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.520 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:11:58.520 04:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.520 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.520 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.520 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.520 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:58.520 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.779 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.038 00:11:59.038 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.038 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.038 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.297 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.297 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:11:59.297 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.297 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.297 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.297 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.297 { 00:11:59.297 "cntlid": 21, 00:11:59.297 "qid": 0, 00:11:59.297 "state": "enabled", 00:11:59.297 "thread": "nvmf_tgt_poll_group_000", 00:11:59.297 "listen_address": { 00:11:59.297 "trtype": "TCP", 00:11:59.297 "adrfam": "IPv4", 00:11:59.297 "traddr": "10.0.0.2", 00:11:59.297 "trsvcid": "4420" 00:11:59.297 }, 00:11:59.297 "peer_address": { 00:11:59.297 "trtype": "TCP", 00:11:59.297 "adrfam": "IPv4", 00:11:59.297 "traddr": "10.0.0.1", 00:11:59.297 "trsvcid": "54434" 00:11:59.297 }, 00:11:59.297 "auth": { 00:11:59.297 "state": "completed", 00:11:59.297 "digest": "sha256", 00:11:59.297 "dhgroup": "ffdhe3072" 00:11:59.297 } 00:11:59.297 } 00:11:59.297 ]' 00:11:59.297 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.297 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.297 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.556 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:59.556 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.556 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.556 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.556 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.814 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:12:00.389 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.389 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:00.389 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.389 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.389 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.389 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:00.389 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:00.389 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:00.648 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:00.906 00:12:00.906 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.906 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.906 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.164 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.164 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.164 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.164 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.164 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.164 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.164 { 00:12:01.164 "cntlid": 
23, 00:12:01.164 "qid": 0, 00:12:01.164 "state": "enabled", 00:12:01.164 "thread": "nvmf_tgt_poll_group_000", 00:12:01.164 "listen_address": { 00:12:01.164 "trtype": "TCP", 00:12:01.164 "adrfam": "IPv4", 00:12:01.164 "traddr": "10.0.0.2", 00:12:01.164 "trsvcid": "4420" 00:12:01.164 }, 00:12:01.164 "peer_address": { 00:12:01.164 "trtype": "TCP", 00:12:01.164 "adrfam": "IPv4", 00:12:01.164 "traddr": "10.0.0.1", 00:12:01.164 "trsvcid": "54458" 00:12:01.164 }, 00:12:01.164 "auth": { 00:12:01.164 "state": "completed", 00:12:01.164 "digest": "sha256", 00:12:01.165 "dhgroup": "ffdhe3072" 00:12:01.165 } 00:12:01.165 } 00:12:01.165 ]' 00:12:01.165 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.165 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.165 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.165 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:01.165 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.165 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.165 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.165 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.422 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:12:01.988 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.988 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:01.988 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.988 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.988 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.988 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:01.988 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.988 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:01.988 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:02.246 04:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.246 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.545 00:12:02.546 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.546 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.546 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.803 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.803 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.803 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.803 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.803 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.803 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.803 { 00:12:02.803 "cntlid": 25, 00:12:02.803 "qid": 0, 00:12:02.803 "state": "enabled", 00:12:02.803 "thread": "nvmf_tgt_poll_group_000", 00:12:02.803 "listen_address": { 00:12:02.803 "trtype": "TCP", 00:12:02.803 "adrfam": "IPv4", 00:12:02.803 "traddr": "10.0.0.2", 00:12:02.803 "trsvcid": "4420" 00:12:02.803 }, 00:12:02.803 "peer_address": { 00:12:02.803 "trtype": "TCP", 00:12:02.803 
"adrfam": "IPv4", 00:12:02.803 "traddr": "10.0.0.1", 00:12:02.803 "trsvcid": "54468" 00:12:02.803 }, 00:12:02.803 "auth": { 00:12:02.803 "state": "completed", 00:12:02.803 "digest": "sha256", 00:12:02.803 "dhgroup": "ffdhe4096" 00:12:02.803 } 00:12:02.803 } 00:12:02.803 ]' 00:12:02.803 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.061 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.061 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.061 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.061 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.061 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.061 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.061 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.319 04:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:12:03.886 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.886 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:03.886 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.886 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.886 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.886 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.886 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:03.886 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:04.144 04:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.144 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.710 00:12:04.710 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.710 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.710 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.710 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.710 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.710 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.710 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.711 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.711 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.711 { 00:12:04.711 "cntlid": 27, 00:12:04.711 "qid": 0, 00:12:04.711 "state": "enabled", 00:12:04.711 "thread": "nvmf_tgt_poll_group_000", 00:12:04.711 "listen_address": { 00:12:04.711 "trtype": "TCP", 00:12:04.711 "adrfam": "IPv4", 00:12:04.711 "traddr": "10.0.0.2", 00:12:04.711 "trsvcid": "4420" 00:12:04.711 }, 00:12:04.711 "peer_address": { 00:12:04.711 "trtype": "TCP", 00:12:04.711 "adrfam": "IPv4", 00:12:04.711 "traddr": "10.0.0.1", 00:12:04.711 "trsvcid": "54504" 00:12:04.711 }, 00:12:04.711 "auth": { 00:12:04.711 "state": "completed", 00:12:04.711 "digest": "sha256", 00:12:04.711 "dhgroup": "ffdhe4096" 00:12:04.711 } 00:12:04.711 } 00:12:04.711 ]' 00:12:04.711 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:12:04.968 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:04.968 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.968 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:04.968 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.968 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.968 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.968 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.227 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:12:05.793 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.793 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:05.793 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.793 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.793 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.793 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.793 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:05.793 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.051 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.310 00:12:06.310 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.310 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.310 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.568 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.568 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.568 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.568 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.568 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.568 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.568 { 00:12:06.568 "cntlid": 29, 00:12:06.568 "qid": 0, 00:12:06.568 "state": "enabled", 00:12:06.568 "thread": "nvmf_tgt_poll_group_000", 00:12:06.568 "listen_address": { 00:12:06.568 "trtype": "TCP", 00:12:06.568 "adrfam": "IPv4", 00:12:06.568 "traddr": "10.0.0.2", 00:12:06.568 "trsvcid": "4420" 00:12:06.568 }, 00:12:06.568 "peer_address": { 00:12:06.568 "trtype": "TCP", 00:12:06.568 "adrfam": "IPv4", 00:12:06.568 "traddr": "10.0.0.1", 00:12:06.568 "trsvcid": "54522" 00:12:06.568 }, 00:12:06.568 "auth": { 00:12:06.568 "state": "completed", 00:12:06.568 "digest": "sha256", 00:12:06.568 "dhgroup": "ffdhe4096" 00:12:06.568 } 00:12:06.568 } 00:12:06.568 ]' 00:12:06.568 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.568 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:06.568 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.827 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.827 04:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.827 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.827 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.827 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.086 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:12:07.651 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.651 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:07.651 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.651 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.651 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.651 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.651 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:07.651 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.910 04:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:07.910 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.168 00:12:08.168 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.168 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.168 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.425 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.425 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.425 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.425 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.425 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.425 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.425 { 00:12:08.425 "cntlid": 31, 00:12:08.425 "qid": 0, 00:12:08.425 "state": "enabled", 00:12:08.425 "thread": "nvmf_tgt_poll_group_000", 00:12:08.425 "listen_address": { 00:12:08.425 "trtype": "TCP", 00:12:08.425 "adrfam": "IPv4", 00:12:08.425 "traddr": "10.0.0.2", 00:12:08.425 "trsvcid": "4420" 00:12:08.425 }, 00:12:08.425 "peer_address": { 00:12:08.425 "trtype": "TCP", 00:12:08.425 "adrfam": "IPv4", 00:12:08.425 "traddr": "10.0.0.1", 00:12:08.425 "trsvcid": "54548" 00:12:08.425 }, 00:12:08.425 "auth": { 00:12:08.425 "state": "completed", 00:12:08.425 "digest": "sha256", 00:12:08.425 "dhgroup": "ffdhe4096" 00:12:08.425 } 00:12:08.425 } 00:12:08.425 ]' 00:12:08.426 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.426 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:08.426 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.426 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:08.426 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.682 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.682 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.682 04:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.682 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:09.618 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.183 00:12:10.183 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.183 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.183 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.441 { 00:12:10.441 "cntlid": 33, 00:12:10.441 "qid": 0, 00:12:10.441 "state": "enabled", 00:12:10.441 "thread": "nvmf_tgt_poll_group_000", 00:12:10.441 "listen_address": { 00:12:10.441 "trtype": "TCP", 00:12:10.441 "adrfam": "IPv4", 00:12:10.441 "traddr": "10.0.0.2", 00:12:10.441 "trsvcid": "4420" 00:12:10.441 }, 00:12:10.441 "peer_address": { 00:12:10.441 "trtype": "TCP", 00:12:10.441 "adrfam": "IPv4", 00:12:10.441 "traddr": "10.0.0.1", 00:12:10.441 "trsvcid": "51898" 00:12:10.441 }, 00:12:10.441 "auth": { 00:12:10.441 "state": "completed", 00:12:10.441 "digest": "sha256", 00:12:10.441 "dhgroup": "ffdhe6144" 00:12:10.441 } 00:12:10.441 } 00:12:10.441 ]' 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.441 04:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.699 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid 
a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:12:11.265 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.265 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:11.265 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.265 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.265 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.265 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.265 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:11.265 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.524 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.090 00:12:12.090 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.090 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.090 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.348 { 00:12:12.348 "cntlid": 35, 00:12:12.348 "qid": 0, 00:12:12.348 "state": "enabled", 00:12:12.348 "thread": "nvmf_tgt_poll_group_000", 00:12:12.348 "listen_address": { 00:12:12.348 "trtype": "TCP", 00:12:12.348 "adrfam": "IPv4", 00:12:12.348 "traddr": "10.0.0.2", 00:12:12.348 "trsvcid": "4420" 00:12:12.348 }, 00:12:12.348 "peer_address": { 00:12:12.348 "trtype": "TCP", 00:12:12.348 "adrfam": "IPv4", 00:12:12.348 "traddr": "10.0.0.1", 00:12:12.348 "trsvcid": "51916" 00:12:12.348 }, 00:12:12.348 "auth": { 00:12:12.348 "state": "completed", 00:12:12.348 "digest": "sha256", 00:12:12.348 "dhgroup": "ffdhe6144" 00:12:12.348 } 00:12:12.348 } 00:12:12.348 ]' 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.348 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.607 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.541 
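The nvme connect/disconnect lines above repeat the same authentication check with the Linux kernel initiator instead of the SPDK host application. A minimal standalone version of that round trip is sketched below; the <...> placeholders stand for the full DHHC-1 secret strings shown in the trace, and everything else is copied verbatim from it.

  # connect with the kernel NVMe/TCP initiator; the two DHHC-1 strings are the
  # host secret and, for bidirectional auth, the controller secret from the trace
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
      --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
      --dhchap-secret 'DHHC-1:01:<host-secret-from-trace>' \
      --dhchap-ctrl-secret 'DHHC-1:02:<ctrl-secret-from-trace>'

  # drop the connection again once the handshake has been exercised
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0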
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.541 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.126 00:12:14.126 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.126 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.126 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.393 { 00:12:14.393 "cntlid": 37, 00:12:14.393 "qid": 0, 00:12:14.393 "state": "enabled", 00:12:14.393 "thread": "nvmf_tgt_poll_group_000", 00:12:14.393 "listen_address": { 00:12:14.393 "trtype": "TCP", 00:12:14.393 "adrfam": "IPv4", 00:12:14.393 "traddr": "10.0.0.2", 00:12:14.393 "trsvcid": "4420" 00:12:14.393 }, 00:12:14.393 "peer_address": { 00:12:14.393 "trtype": "TCP", 00:12:14.393 "adrfam": "IPv4", 00:12:14.393 "traddr": "10.0.0.1", 00:12:14.393 "trsvcid": "51948" 00:12:14.393 }, 00:12:14.393 "auth": { 00:12:14.393 "state": "completed", 00:12:14.393 "digest": "sha256", 00:12:14.393 "dhgroup": "ffdhe6144" 00:12:14.393 } 00:12:14.393 } 00:12:14.393 ]' 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.393 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.394 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.652 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:12:15.587 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.587 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:15.587 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
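The assertions traced above do not rely on RPC exit codes alone; they read back what was actually negotiated. Condensed, the verification step amounts to the sketch below, with the expected sha256/ffdhe6144/completed values taken from this iteration of the trace. The target-side call is assumed to use the default RPC socket, since the rpc_cmd wrapper is not expanded in the xtrace output.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # host side: the freshly attached controller must show up under the expected name
  [[ "$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

  # target side: inspect the accepted qpair and check the negotiated auth parameters
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256    ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe6144 ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]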
00:12:15.587 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.587 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.587 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.587 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:15.587 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:15.846 04:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.104 00:12:16.104 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.104 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.104 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.362 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.362 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.362 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.362 04:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.620 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.620 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.620 { 00:12:16.620 "cntlid": 39, 00:12:16.620 "qid": 0, 00:12:16.620 "state": "enabled", 00:12:16.620 "thread": "nvmf_tgt_poll_group_000", 00:12:16.620 "listen_address": { 00:12:16.620 "trtype": "TCP", 00:12:16.620 "adrfam": "IPv4", 00:12:16.620 "traddr": "10.0.0.2", 00:12:16.620 "trsvcid": "4420" 00:12:16.620 }, 00:12:16.620 "peer_address": { 00:12:16.620 "trtype": "TCP", 00:12:16.620 "adrfam": "IPv4", 00:12:16.620 "traddr": "10.0.0.1", 00:12:16.620 "trsvcid": "51964" 00:12:16.620 }, 00:12:16.620 "auth": { 00:12:16.620 "state": "completed", 00:12:16.620 "digest": "sha256", 00:12:16.620 "dhgroup": "ffdhe6144" 00:12:16.620 } 00:12:16.620 } 00:12:16.620 ]' 00:12:16.620 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.620 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:16.620 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.620 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.620 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.620 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.620 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.620 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.879 04:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:12:17.445 04:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.445 04:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:17.445 04:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.445 04:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.703 04:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.703 04:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:17.703 04:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.703 04:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:17.703 04:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.961 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.529 00:12:18.529 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.529 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.529 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.787 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.787 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.787 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.787 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.787 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.787 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.787 { 00:12:18.787 "cntlid": 41, 00:12:18.787 "qid": 0, 
00:12:18.787 "state": "enabled", 00:12:18.787 "thread": "nvmf_tgt_poll_group_000", 00:12:18.787 "listen_address": { 00:12:18.787 "trtype": "TCP", 00:12:18.787 "adrfam": "IPv4", 00:12:18.787 "traddr": "10.0.0.2", 00:12:18.787 "trsvcid": "4420" 00:12:18.787 }, 00:12:18.787 "peer_address": { 00:12:18.787 "trtype": "TCP", 00:12:18.787 "adrfam": "IPv4", 00:12:18.787 "traddr": "10.0.0.1", 00:12:18.787 "trsvcid": "51986" 00:12:18.787 }, 00:12:18.787 "auth": { 00:12:18.787 "state": "completed", 00:12:18.787 "digest": "sha256", 00:12:18.787 "dhgroup": "ffdhe8192" 00:12:18.787 } 00:12:18.787 } 00:12:18.787 ]' 00:12:18.788 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.788 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:18.788 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.788 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:18.788 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.046 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.046 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.046 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.307 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:12:19.874 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.874 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:19.875 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.875 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.875 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.875 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.875 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:19.875 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.135 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.701 00:12:20.701 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.701 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.701 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.961 { 00:12:20.961 "cntlid": 43, 00:12:20.961 "qid": 0, 00:12:20.961 "state": "enabled", 00:12:20.961 "thread": "nvmf_tgt_poll_group_000", 00:12:20.961 "listen_address": { 00:12:20.961 "trtype": "TCP", 00:12:20.961 "adrfam": "IPv4", 00:12:20.961 "traddr": "10.0.0.2", 00:12:20.961 "trsvcid": "4420" 00:12:20.961 }, 00:12:20.961 "peer_address": { 00:12:20.961 "trtype": "TCP", 00:12:20.961 "adrfam": "IPv4", 00:12:20.961 "traddr": "10.0.0.1", 
00:12:20.961 "trsvcid": "49980" 00:12:20.961 }, 00:12:20.961 "auth": { 00:12:20.961 "state": "completed", 00:12:20.961 "digest": "sha256", 00:12:20.961 "dhgroup": "ffdhe8192" 00:12:20.961 } 00:12:20.961 } 00:12:20.961 ]' 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:20.961 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.220 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.220 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.220 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.479 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:12:22.046 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.046 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:22.046 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.046 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.046 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.046 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.046 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:22.046 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:22.303 04:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.303 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.868 00:12:22.868 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.868 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.868 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.125 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.125 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.125 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.125 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.125 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.126 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.126 { 00:12:23.126 "cntlid": 45, 00:12:23.126 "qid": 0, 00:12:23.126 "state": "enabled", 00:12:23.126 "thread": "nvmf_tgt_poll_group_000", 00:12:23.126 "listen_address": { 00:12:23.126 "trtype": "TCP", 00:12:23.126 "adrfam": "IPv4", 00:12:23.126 "traddr": "10.0.0.2", 00:12:23.126 "trsvcid": "4420" 00:12:23.126 }, 00:12:23.126 "peer_address": { 00:12:23.126 "trtype": "TCP", 00:12:23.126 "adrfam": "IPv4", 00:12:23.126 "traddr": "10.0.0.1", 00:12:23.126 "trsvcid": "50012" 00:12:23.126 }, 00:12:23.126 "auth": { 00:12:23.126 "state": "completed", 00:12:23.126 "digest": "sha256", 00:12:23.126 "dhgroup": "ffdhe8192" 00:12:23.126 } 00:12:23.126 } 00:12:23.126 ]' 00:12:23.126 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.126 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:12:23.126 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.126 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.126 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.383 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.383 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.383 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.640 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:12:24.205 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.205 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:24.205 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.205 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.205 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.205 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.205 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:24.205 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 
--dhchap-key key3 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.462 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:25.027 00:12:25.027 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.027 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.027 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.285 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.285 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.285 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.285 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.285 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.285 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.285 { 00:12:25.285 "cntlid": 47, 00:12:25.285 "qid": 0, 00:12:25.285 "state": "enabled", 00:12:25.285 "thread": "nvmf_tgt_poll_group_000", 00:12:25.285 "listen_address": { 00:12:25.285 "trtype": "TCP", 00:12:25.285 "adrfam": "IPv4", 00:12:25.285 "traddr": "10.0.0.2", 00:12:25.285 "trsvcid": "4420" 00:12:25.285 }, 00:12:25.285 "peer_address": { 00:12:25.285 "trtype": "TCP", 00:12:25.285 "adrfam": "IPv4", 00:12:25.285 "traddr": "10.0.0.1", 00:12:25.285 "trsvcid": "50038" 00:12:25.285 }, 00:12:25.285 "auth": { 00:12:25.285 "state": "completed", 00:12:25.285 "digest": "sha256", 00:12:25.285 "dhgroup": "ffdhe8192" 00:12:25.285 } 00:12:25.285 } 00:12:25.285 ]' 00:12:25.285 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.542 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:25.542 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.542 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:25.542 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.542 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:12:25.542 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.542 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.800 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:12:26.364 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.364 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:26.364 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.364 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.364 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.364 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:26.364 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:26.364 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.364 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:26.364 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.930 04:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.188 00:12:27.188 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.188 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.188 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.445 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.445 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.445 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.445 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.445 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.445 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.445 { 00:12:27.445 "cntlid": 49, 00:12:27.445 "qid": 0, 00:12:27.445 "state": "enabled", 00:12:27.445 "thread": "nvmf_tgt_poll_group_000", 00:12:27.446 "listen_address": { 00:12:27.446 "trtype": "TCP", 00:12:27.446 "adrfam": "IPv4", 00:12:27.446 "traddr": "10.0.0.2", 00:12:27.446 "trsvcid": "4420" 00:12:27.446 }, 00:12:27.446 "peer_address": { 00:12:27.446 "trtype": "TCP", 00:12:27.446 "adrfam": "IPv4", 00:12:27.446 "traddr": "10.0.0.1", 00:12:27.446 "trsvcid": "50070" 00:12:27.446 }, 00:12:27.446 "auth": { 00:12:27.446 "state": "completed", 00:12:27.446 "digest": "sha384", 00:12:27.446 "dhgroup": "null" 00:12:27.446 } 00:12:27.446 } 00:12:27.446 ]' 00:12:27.446 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.446 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.446 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.446 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:27.446 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.446 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.446 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.446 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.704 04:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.639 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.896 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.896 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.896 04:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.155 00:12:29.155 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.155 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.155 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.413 { 00:12:29.413 "cntlid": 51, 00:12:29.413 "qid": 0, 00:12:29.413 "state": "enabled", 00:12:29.413 "thread": "nvmf_tgt_poll_group_000", 00:12:29.413 "listen_address": { 00:12:29.413 "trtype": "TCP", 00:12:29.413 "adrfam": "IPv4", 00:12:29.413 "traddr": "10.0.0.2", 00:12:29.413 "trsvcid": "4420" 00:12:29.413 }, 00:12:29.413 "peer_address": { 00:12:29.413 "trtype": "TCP", 00:12:29.413 "adrfam": "IPv4", 00:12:29.413 "traddr": "10.0.0.1", 00:12:29.413 "trsvcid": "38118" 00:12:29.413 }, 00:12:29.413 "auth": { 00:12:29.413 "state": "completed", 00:12:29.413 "digest": "sha384", 00:12:29.413 "dhgroup": "null" 00:12:29.413 } 00:12:29.413 } 00:12:29.413 ]' 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.413 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.671 04:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret 
DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.603 04:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.861 00:12:31.119 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.119 04:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.119 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.377 { 00:12:31.377 "cntlid": 53, 00:12:31.377 "qid": 0, 00:12:31.377 "state": "enabled", 00:12:31.377 "thread": "nvmf_tgt_poll_group_000", 00:12:31.377 "listen_address": { 00:12:31.377 "trtype": "TCP", 00:12:31.377 "adrfam": "IPv4", 00:12:31.377 "traddr": "10.0.0.2", 00:12:31.377 "trsvcid": "4420" 00:12:31.377 }, 00:12:31.377 "peer_address": { 00:12:31.377 "trtype": "TCP", 00:12:31.377 "adrfam": "IPv4", 00:12:31.377 "traddr": "10.0.0.1", 00:12:31.377 "trsvcid": "38142" 00:12:31.377 }, 00:12:31.377 "auth": { 00:12:31.377 "state": "completed", 00:12:31.377 "digest": "sha384", 00:12:31.377 "dhgroup": "null" 00:12:31.377 } 00:12:31.377 } 00:12:31.377 ]' 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.377 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.635 04:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:12:32.201 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.460 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:32.460 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.460 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.460 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.460 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.460 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:32.460 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.718 04:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.977 00:12:32.977 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.977 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.977 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.242 { 00:12:33.242 "cntlid": 55, 00:12:33.242 "qid": 0, 00:12:33.242 "state": "enabled", 00:12:33.242 "thread": "nvmf_tgt_poll_group_000", 00:12:33.242 "listen_address": { 00:12:33.242 "trtype": "TCP", 00:12:33.242 "adrfam": "IPv4", 00:12:33.242 "traddr": "10.0.0.2", 00:12:33.242 "trsvcid": "4420" 00:12:33.242 }, 00:12:33.242 "peer_address": { 00:12:33.242 "trtype": "TCP", 00:12:33.242 "adrfam": "IPv4", 00:12:33.242 "traddr": "10.0.0.1", 00:12:33.242 "trsvcid": "38182" 00:12:33.242 }, 00:12:33.242 "auth": { 00:12:33.242 "state": "completed", 00:12:33.242 "digest": "sha384", 00:12:33.242 "dhgroup": "null" 00:12:33.242 } 00:12:33.242 } 00:12:33.242 ]' 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.242 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.508 04:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:12:34.075 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.075 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:34.075 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.075 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.075 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.075 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:34.075 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.075 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:34.075 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.334 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.591 00:12:34.591 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.591 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.591 04:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.850 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.850 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.850 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.850 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.850 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.850 04:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.850 { 00:12:34.850 "cntlid": 57, 00:12:34.850 "qid": 0, 00:12:34.850 "state": "enabled", 00:12:34.850 "thread": "nvmf_tgt_poll_group_000", 00:12:34.850 "listen_address": { 00:12:34.850 "trtype": "TCP", 00:12:34.850 "adrfam": "IPv4", 00:12:34.850 "traddr": "10.0.0.2", 00:12:34.850 "trsvcid": "4420" 00:12:34.850 }, 00:12:34.850 "peer_address": { 00:12:34.850 "trtype": "TCP", 00:12:34.850 "adrfam": "IPv4", 00:12:34.850 "traddr": "10.0.0.1", 00:12:34.850 "trsvcid": "38224" 00:12:34.850 }, 00:12:34.850 "auth": { 00:12:34.850 "state": "completed", 00:12:34.850 "digest": "sha384", 00:12:34.850 "dhgroup": "ffdhe2048" 00:12:34.850 } 00:12:34.850 } 00:12:34.850 ]' 00:12:34.850 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.850 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.850 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.107 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:35.107 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.107 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.107 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.107 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.365 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:12:35.929 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.929 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:35.929 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.929 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.929 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.929 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.929 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:35.929 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:36.187 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:36.187 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.187 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:36.187 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:36.187 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:36.187 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.187 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.188 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.188 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.188 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.188 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.188 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.445 00:12:36.445 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.445 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.445 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.703 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.704 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.704 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.704 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.704 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.704 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.704 { 00:12:36.704 "cntlid": 59, 00:12:36.704 "qid": 0, 00:12:36.704 "state": "enabled", 00:12:36.704 "thread": "nvmf_tgt_poll_group_000", 00:12:36.704 "listen_address": { 00:12:36.704 "trtype": "TCP", 00:12:36.704 "adrfam": "IPv4", 00:12:36.704 "traddr": "10.0.0.2", 00:12:36.704 "trsvcid": "4420" 
00:12:36.704 }, 00:12:36.704 "peer_address": { 00:12:36.704 "trtype": "TCP", 00:12:36.704 "adrfam": "IPv4", 00:12:36.704 "traddr": "10.0.0.1", 00:12:36.704 "trsvcid": "38262" 00:12:36.704 }, 00:12:36.704 "auth": { 00:12:36.704 "state": "completed", 00:12:36.704 "digest": "sha384", 00:12:36.704 "dhgroup": "ffdhe2048" 00:12:36.704 } 00:12:36.704 } 00:12:36.704 ]' 00:12:36.704 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.704 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.704 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.704 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:36.704 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.704 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.704 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.704 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.962 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:12:37.528 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.528 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:37.528 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.528 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.528 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.528 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.528 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:37.528 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
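(Reading aid: the surrounding trace is one pass of the suite's connect_authenticate helper, here sha384/ffdhe2048 with key index 2. The outline below is a hedged summary reconstructed only from the commands visible in this log, with secrets and the host NQN/UUID elided as placeholders; rpc_cmd is the autotest framework's RPC helper used against the target, and hostrpc is shown in the trace expanding to scripts/rpc.py -s /var/tmp/host.sock. It is not a standalone script.)

# 1. Point the SPDK host bdev layer at the digest/dhgroup under test
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# 2. Allow the host NQN on the target subsystem, binding the DH-HMAC-CHAP key
#    (and the controller key, when one exists for this key index)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
    --dhchap-key keyN --dhchap-ctrlr-key ckeyN

# 3. Attach from the SPDK host side with the same key pair
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key keyN --dhchap-ctrlr-key ckeyN

# 4. Verify the controller came up and the qpair negotiated the expected auth parameters
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # expect digest, dhgroup, "completed"

# 5. Tear down, repeat the check with the kernel initiator, then remove the host entry
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <host-nqn> --hostid <uuid> \
    --dhchap-secret DHHC-1:xx:<elided>: --dhchap-ctrl-secret DHHC-1:xx:<elided>:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>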
00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.786 04:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.044 00:12:38.044 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.044 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.044 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.303 { 00:12:38.303 "cntlid": 61, 00:12:38.303 "qid": 0, 00:12:38.303 "state": "enabled", 00:12:38.303 "thread": "nvmf_tgt_poll_group_000", 00:12:38.303 "listen_address": { 00:12:38.303 "trtype": "TCP", 00:12:38.303 "adrfam": "IPv4", 00:12:38.303 "traddr": "10.0.0.2", 00:12:38.303 "trsvcid": "4420" 00:12:38.303 }, 00:12:38.303 "peer_address": { 00:12:38.303 "trtype": "TCP", 00:12:38.303 "adrfam": "IPv4", 00:12:38.303 "traddr": "10.0.0.1", 00:12:38.303 "trsvcid": "38294" 00:12:38.303 }, 00:12:38.303 "auth": { 00:12:38.303 "state": "completed", 00:12:38.303 "digest": "sha384", 00:12:38.303 "dhgroup": "ffdhe2048" 00:12:38.303 } 00:12:38.303 } 00:12:38.303 ]' 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.303 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.561 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.820 04:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:12:39.412 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.412 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:39.412 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.412 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.412 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.412 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.412 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:39.412 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:39.412 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:39.412 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.413 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:39.413 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:39.413 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:39.413 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.413 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:12:39.413 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.413 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.413 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.413 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.413 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.671 00:12:39.671 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.671 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.671 04:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.929 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.929 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.929 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.929 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.929 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.929 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.929 { 00:12:39.929 "cntlid": 63, 00:12:39.929 "qid": 0, 00:12:39.929 "state": "enabled", 00:12:39.929 "thread": "nvmf_tgt_poll_group_000", 00:12:39.929 "listen_address": { 00:12:39.929 "trtype": "TCP", 00:12:39.929 "adrfam": "IPv4", 00:12:39.929 "traddr": "10.0.0.2", 00:12:39.929 "trsvcid": "4420" 00:12:39.929 }, 00:12:39.929 "peer_address": { 00:12:39.929 "trtype": "TCP", 00:12:39.929 "adrfam": "IPv4", 00:12:39.929 "traddr": "10.0.0.1", 00:12:39.929 "trsvcid": "45558" 00:12:39.929 }, 00:12:39.929 "auth": { 00:12:39.929 "state": "completed", 00:12:39.929 "digest": "sha384", 00:12:39.929 "dhgroup": "ffdhe2048" 00:12:39.929 } 00:12:39.929 } 00:12:39.929 ]' 00:12:39.929 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.186 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.186 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.186 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.186 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:12:40.186 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.186 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.186 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.443 04:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:12:41.009 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.009 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:41.009 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.009 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.009 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.009 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.009 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.009 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:41.009 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.267 04:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.267 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.525 00:12:41.525 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.525 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.525 04:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.782 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.782 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.782 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.782 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.782 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.782 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.782 { 00:12:41.782 "cntlid": 65, 00:12:41.782 "qid": 0, 00:12:41.782 "state": "enabled", 00:12:41.782 "thread": "nvmf_tgt_poll_group_000", 00:12:41.782 "listen_address": { 00:12:41.782 "trtype": "TCP", 00:12:41.782 "adrfam": "IPv4", 00:12:41.782 "traddr": "10.0.0.2", 00:12:41.782 "trsvcid": "4420" 00:12:41.782 }, 00:12:41.782 "peer_address": { 00:12:41.782 "trtype": "TCP", 00:12:41.782 "adrfam": "IPv4", 00:12:41.782 "traddr": "10.0.0.1", 00:12:41.782 "trsvcid": "45592" 00:12:41.782 }, 00:12:41.782 "auth": { 00:12:41.782 "state": "completed", 00:12:41.782 "digest": "sha384", 00:12:41.782 "dhgroup": "ffdhe3072" 00:12:41.782 } 00:12:41.782 } 00:12:41.783 ]' 00:12:41.783 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.783 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:41.783 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.783 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:41.783 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.040 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.040 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.040 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.298 04:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:12:42.864 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.864 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:42.865 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.865 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.865 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.865 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.865 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:42.865 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:43.123 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:43.123 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.123 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:43.123 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:43.123 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:43.123 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.124 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.124 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.124 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.124 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.124 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:12:43.124 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.382 00:12:43.382 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.382 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.382 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.640 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.640 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.640 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.640 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.640 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.640 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.640 { 00:12:43.640 "cntlid": 67, 00:12:43.640 "qid": 0, 00:12:43.640 "state": "enabled", 00:12:43.640 "thread": "nvmf_tgt_poll_group_000", 00:12:43.640 "listen_address": { 00:12:43.640 "trtype": "TCP", 00:12:43.640 "adrfam": "IPv4", 00:12:43.640 "traddr": "10.0.0.2", 00:12:43.640 "trsvcid": "4420" 00:12:43.640 }, 00:12:43.640 "peer_address": { 00:12:43.640 "trtype": "TCP", 00:12:43.640 "adrfam": "IPv4", 00:12:43.641 "traddr": "10.0.0.1", 00:12:43.641 "trsvcid": "45616" 00:12:43.641 }, 00:12:43.641 "auth": { 00:12:43.641 "state": "completed", 00:12:43.641 "digest": "sha384", 00:12:43.641 "dhgroup": "ffdhe3072" 00:12:43.641 } 00:12:43.641 } 00:12:43.641 ]' 00:12:43.641 04:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.898 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.898 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.898 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:43.898 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.898 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.898 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.898 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.156 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid 
a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:12:44.721 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.721 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:44.721 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.721 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.721 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.721 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.721 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:44.721 04:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.979 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
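For reference, the per-key round that this trace repeats for every digest/dhgroup combination reduces to three RPC calls: constrain the host bdev app's DH-CHAP negotiation parameters, register the host NQN with its key(s) on the target subsystem, and attach the controller with the matching key(s). A minimal sketch using the socket path, addresses and NQNs from this run; rpc_cmd is the test suite's wrapper for the target-side rpc.py and is assumed here to point at the target's default RPC socket, and key2/ckey2 are the keyring names set up earlier in the test:

# Host-side bdev app: restrict DH-CHAP negotiation to one digest/dhgroup pair.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target side: allow this host NQN on the subsystem with key2/ckey2.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach the controller, authenticating with the same key pair.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2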
00:12:45.236 00:12:45.236 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.236 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.236 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.495 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.495 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.495 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.495 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.495 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.495 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.495 { 00:12:45.495 "cntlid": 69, 00:12:45.495 "qid": 0, 00:12:45.495 "state": "enabled", 00:12:45.495 "thread": "nvmf_tgt_poll_group_000", 00:12:45.495 "listen_address": { 00:12:45.495 "trtype": "TCP", 00:12:45.495 "adrfam": "IPv4", 00:12:45.495 "traddr": "10.0.0.2", 00:12:45.495 "trsvcid": "4420" 00:12:45.495 }, 00:12:45.495 "peer_address": { 00:12:45.495 "trtype": "TCP", 00:12:45.495 "adrfam": "IPv4", 00:12:45.495 "traddr": "10.0.0.1", 00:12:45.495 "trsvcid": "45634" 00:12:45.495 }, 00:12:45.495 "auth": { 00:12:45.495 "state": "completed", 00:12:45.495 "digest": "sha384", 00:12:45.495 "dhgroup": "ffdhe3072" 00:12:45.495 } 00:12:45.495 } 00:12:45.495 ]' 00:12:45.495 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.495 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.495 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.753 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:45.753 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.753 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.753 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.753 04:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.015 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:12:46.581 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
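Between the SPDK-initiator checks, each round also exercises the kernel initiator with the same credentials: nvme-cli is given the DH-CHAP secrets directly as DHHC-1 strings, then the connection is torn down and the host deregistered. A sketch of that sequence with the secrets replaced by placeholder variables ($HOST_SECRET and $CTRL_SECRET are placeholders; the full DHHC-1:xx:...: values appear verbatim in the trace above):

# Kernel host path: connect with explicit DH-CHAP secrets, then tear down.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"  # placeholders for the DHHC-1 strings
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274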
00:12:46.581 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:46.581 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.581 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.581 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.581 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.581 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:46.581 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.840 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.841 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.099 00:12:47.099 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.358 { 00:12:47.358 "cntlid": 71, 00:12:47.358 "qid": 0, 00:12:47.358 "state": "enabled", 00:12:47.358 "thread": "nvmf_tgt_poll_group_000", 00:12:47.358 "listen_address": { 00:12:47.358 "trtype": "TCP", 00:12:47.358 "adrfam": "IPv4", 00:12:47.358 "traddr": "10.0.0.2", 00:12:47.358 "trsvcid": "4420" 00:12:47.358 }, 00:12:47.358 "peer_address": { 00:12:47.358 "trtype": "TCP", 00:12:47.358 "adrfam": "IPv4", 00:12:47.358 "traddr": "10.0.0.1", 00:12:47.358 "trsvcid": "45670" 00:12:47.358 }, 00:12:47.358 "auth": { 00:12:47.358 "state": "completed", 00:12:47.358 "digest": "sha384", 00:12:47.358 "dhgroup": "ffdhe3072" 00:12:47.358 } 00:12:47.358 } 00:12:47.358 ]' 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.358 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.616 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.616 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.616 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.616 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.616 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.884 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:12:48.456 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.456 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:48.456 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.456 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.456 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:12:48.456 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.456 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.456 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:48.456 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.715 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.973 00:12:48.974 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.974 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.974 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.233 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.233 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.233 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.233 04:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.233 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.233 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.233 { 00:12:49.233 "cntlid": 73, 00:12:49.233 "qid": 0, 00:12:49.233 "state": "enabled", 00:12:49.233 "thread": "nvmf_tgt_poll_group_000", 00:12:49.233 "listen_address": { 00:12:49.233 "trtype": "TCP", 00:12:49.233 "adrfam": "IPv4", 00:12:49.233 "traddr": "10.0.0.2", 00:12:49.233 "trsvcid": "4420" 00:12:49.233 }, 00:12:49.233 "peer_address": { 00:12:49.233 "trtype": "TCP", 00:12:49.233 "adrfam": "IPv4", 00:12:49.233 "traddr": "10.0.0.1", 00:12:49.233 "trsvcid": "55296" 00:12:49.233 }, 00:12:49.233 "auth": { 00:12:49.233 "state": "completed", 00:12:49.233 "digest": "sha384", 00:12:49.233 "dhgroup": "ffdhe4096" 00:12:49.233 } 00:12:49.233 } 00:12:49.233 ]' 00:12:49.233 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.233 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:49.233 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.233 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:49.233 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.491 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.491 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.491 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.491 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:12:50.427 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.427 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:50.427 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.427 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.427 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.427 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.428 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.686 00:12:50.686 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.686 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.686 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.945 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.945 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.945 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.945 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.945 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.945 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.945 { 00:12:50.945 "cntlid": 75, 00:12:50.945 "qid": 0, 00:12:50.945 
"state": "enabled", 00:12:50.945 "thread": "nvmf_tgt_poll_group_000", 00:12:50.945 "listen_address": { 00:12:50.945 "trtype": "TCP", 00:12:50.945 "adrfam": "IPv4", 00:12:50.945 "traddr": "10.0.0.2", 00:12:50.945 "trsvcid": "4420" 00:12:50.945 }, 00:12:50.945 "peer_address": { 00:12:50.945 "trtype": "TCP", 00:12:50.945 "adrfam": "IPv4", 00:12:50.945 "traddr": "10.0.0.1", 00:12:50.945 "trsvcid": "55326" 00:12:50.945 }, 00:12:50.945 "auth": { 00:12:50.945 "state": "completed", 00:12:50.945 "digest": "sha384", 00:12:50.945 "dhgroup": "ffdhe4096" 00:12:50.945 } 00:12:50.945 } 00:12:50.945 ]' 00:12:50.945 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.945 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:50.945 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.204 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:51.204 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.204 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.204 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.204 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.462 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:12:52.029 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.029 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:52.029 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.029 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.029 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.029 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.029 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:52.029 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.287 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.545 00:12:52.545 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.545 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.545 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.804 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.804 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.804 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.804 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.804 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.804 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.804 { 00:12:52.804 "cntlid": 77, 00:12:52.804 "qid": 0, 00:12:52.804 "state": "enabled", 00:12:52.804 "thread": "nvmf_tgt_poll_group_000", 00:12:52.804 "listen_address": { 00:12:52.804 "trtype": "TCP", 00:12:52.804 "adrfam": "IPv4", 00:12:52.804 "traddr": "10.0.0.2", 00:12:52.804 "trsvcid": "4420" 00:12:52.804 }, 00:12:52.804 "peer_address": { 00:12:52.804 "trtype": "TCP", 00:12:52.804 "adrfam": "IPv4", 00:12:52.804 "traddr": "10.0.0.1", 00:12:52.804 "trsvcid": "55358" 00:12:52.804 }, 00:12:52.804 
"auth": { 00:12:52.804 "state": "completed", 00:12:52.804 "digest": "sha384", 00:12:52.804 "dhgroup": "ffdhe4096" 00:12:52.804 } 00:12:52.804 } 00:12:52.804 ]' 00:12:52.804 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.804 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:52.804 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.804 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:52.804 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.804 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.804 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.804 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.063 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:12:53.630 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.630 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:53.630 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.630 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.630 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.630 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.630 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:53.630 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:53.890 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:53.890 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.890 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:53.890 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:53.890 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:12:53.890 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.890 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:12:53.890 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.890 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.890 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.891 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.891 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:54.149 00:12:54.149 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:54.149 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.149 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.408 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.408 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.408 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.408 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.666 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.666 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.666 { 00:12:54.666 "cntlid": 79, 00:12:54.666 "qid": 0, 00:12:54.666 "state": "enabled", 00:12:54.666 "thread": "nvmf_tgt_poll_group_000", 00:12:54.666 "listen_address": { 00:12:54.666 "trtype": "TCP", 00:12:54.666 "adrfam": "IPv4", 00:12:54.666 "traddr": "10.0.0.2", 00:12:54.666 "trsvcid": "4420" 00:12:54.666 }, 00:12:54.666 "peer_address": { 00:12:54.666 "trtype": "TCP", 00:12:54.666 "adrfam": "IPv4", 00:12:54.666 "traddr": "10.0.0.1", 00:12:54.666 "trsvcid": "55376" 00:12:54.666 }, 00:12:54.666 "auth": { 00:12:54.666 "state": "completed", 00:12:54.666 "digest": "sha384", 00:12:54.666 "dhgroup": "ffdhe4096" 00:12:54.666 } 00:12:54.666 } 00:12:54.666 ]' 00:12:54.666 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.666 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:54.666 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
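One detail of the key3 rounds, including the one in progress here: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the controller key, and both the subsystem registration and the attach are issued with --dhchap-key key3 alone, that is, without the controller-side (bidirectional) secret. Reduced to the two calls as they appear in the trace:

rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3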
00:12:54.666 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:54.666 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:54.666 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.666 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.666 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.925 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:12:55.492 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.492 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:55.492 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.492 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.492 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.492 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:55.492 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.492 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:55.492 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:55.761 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:55.761 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.761 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:55.761 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:55.761 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:55.762 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.762 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.762 04:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.762 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.762 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.762 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.762 04:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.055 00:12:56.055 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.055 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.055 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.327 { 00:12:56.327 "cntlid": 81, 00:12:56.327 "qid": 0, 00:12:56.327 "state": "enabled", 00:12:56.327 "thread": "nvmf_tgt_poll_group_000", 00:12:56.327 "listen_address": { 00:12:56.327 "trtype": "TCP", 00:12:56.327 "adrfam": "IPv4", 00:12:56.327 "traddr": "10.0.0.2", 00:12:56.327 "trsvcid": "4420" 00:12:56.327 }, 00:12:56.327 "peer_address": { 00:12:56.327 "trtype": "TCP", 00:12:56.327 "adrfam": "IPv4", 00:12:56.327 "traddr": "10.0.0.1", 00:12:56.327 "trsvcid": "55398" 00:12:56.327 }, 00:12:56.327 "auth": { 00:12:56.327 "state": "completed", 00:12:56.327 "digest": "sha384", 00:12:56.327 "dhgroup": "ffdhe6144" 00:12:56.327 } 00:12:56.327 } 00:12:56.327 ]' 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.327 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.585 04:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:12:57.151 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.151 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:57.151 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.151 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.151 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.151 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.151 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:57.151 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.410 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.411 04:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.978 00:12:57.978 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.978 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.978 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.978 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.978 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.978 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.978 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.237 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.237 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.237 { 00:12:58.237 "cntlid": 83, 00:12:58.237 "qid": 0, 00:12:58.237 "state": "enabled", 00:12:58.237 "thread": "nvmf_tgt_poll_group_000", 00:12:58.237 "listen_address": { 00:12:58.237 "trtype": "TCP", 00:12:58.237 "adrfam": "IPv4", 00:12:58.237 "traddr": "10.0.0.2", 00:12:58.237 "trsvcid": "4420" 00:12:58.237 }, 00:12:58.237 "peer_address": { 00:12:58.237 "trtype": "TCP", 00:12:58.237 "adrfam": "IPv4", 00:12:58.237 "traddr": "10.0.0.1", 00:12:58.237 "trsvcid": "55428" 00:12:58.237 }, 00:12:58.237 "auth": { 00:12:58.237 "state": "completed", 00:12:58.237 "digest": "sha384", 00:12:58.237 "dhgroup": "ffdhe6144" 00:12:58.237 } 00:12:58.237 } 00:12:58.237 ]' 00:12:58.237 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.237 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:58.237 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.237 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:58.237 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.237 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.237 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.237 04:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.497 04:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:12:59.064 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.064 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:12:59.064 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.064 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.064 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.064 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.064 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:59.064 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.323 04:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.897 00:12:59.897 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:59.897 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.897 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.897 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.897 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.897 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.897 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.158 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.158 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.158 { 00:13:00.158 "cntlid": 85, 00:13:00.158 "qid": 0, 00:13:00.158 "state": "enabled", 00:13:00.158 "thread": "nvmf_tgt_poll_group_000", 00:13:00.158 "listen_address": { 00:13:00.158 "trtype": "TCP", 00:13:00.158 "adrfam": "IPv4", 00:13:00.158 "traddr": "10.0.0.2", 00:13:00.158 "trsvcid": "4420" 00:13:00.158 }, 00:13:00.158 "peer_address": { 00:13:00.158 "trtype": "TCP", 00:13:00.158 "adrfam": "IPv4", 00:13:00.158 "traddr": "10.0.0.1", 00:13:00.158 "trsvcid": "53504" 00:13:00.158 }, 00:13:00.158 "auth": { 00:13:00.158 "state": "completed", 00:13:00.158 "digest": "sha384", 00:13:00.158 "dhgroup": "ffdhe6144" 00:13:00.158 } 00:13:00.158 } 00:13:00.158 ]' 00:13:00.158 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.158 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:00.158 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.158 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:00.158 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.159 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.159 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.159 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.419 04:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret 
DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:13:00.985 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.985 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:00.985 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.985 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.985 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.985 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.985 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:00.985 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:01.243 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:01.502 00:13:01.502 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.502 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:13:01.502 04:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.760 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.760 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.760 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.760 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.760 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.760 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.760 { 00:13:01.760 "cntlid": 87, 00:13:01.760 "qid": 0, 00:13:01.760 "state": "enabled", 00:13:01.760 "thread": "nvmf_tgt_poll_group_000", 00:13:01.760 "listen_address": { 00:13:01.760 "trtype": "TCP", 00:13:01.760 "adrfam": "IPv4", 00:13:01.760 "traddr": "10.0.0.2", 00:13:01.760 "trsvcid": "4420" 00:13:01.760 }, 00:13:01.760 "peer_address": { 00:13:01.760 "trtype": "TCP", 00:13:01.760 "adrfam": "IPv4", 00:13:01.760 "traddr": "10.0.0.1", 00:13:01.760 "trsvcid": "53528" 00:13:01.760 }, 00:13:01.760 "auth": { 00:13:01.760 "state": "completed", 00:13:01.760 "digest": "sha384", 00:13:01.760 "dhgroup": "ffdhe6144" 00:13:01.760 } 00:13:01.760 } 00:13:01.760 ]' 00:13:01.760 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:02.018 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:02.018 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:02.018 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:02.018 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:02.018 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.018 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.018 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.276 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:13:02.843 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.843 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:02.843 04:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.843 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.843 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.843 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:02.843 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.843 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:02.843 04:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:03.116 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:03.116 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.116 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:03.116 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:03.116 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:03.116 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.117 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.117 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.117 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.117 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.117 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.117 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.702 00:13:03.702 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.702 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.702 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.702 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:13:03.702 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.702 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.702 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.702 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.702 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.702 { 00:13:03.702 "cntlid": 89, 00:13:03.702 "qid": 0, 00:13:03.702 "state": "enabled", 00:13:03.702 "thread": "nvmf_tgt_poll_group_000", 00:13:03.702 "listen_address": { 00:13:03.702 "trtype": "TCP", 00:13:03.702 "adrfam": "IPv4", 00:13:03.702 "traddr": "10.0.0.2", 00:13:03.702 "trsvcid": "4420" 00:13:03.702 }, 00:13:03.702 "peer_address": { 00:13:03.702 "trtype": "TCP", 00:13:03.702 "adrfam": "IPv4", 00:13:03.702 "traddr": "10.0.0.1", 00:13:03.702 "trsvcid": "53566" 00:13:03.702 }, 00:13:03.702 "auth": { 00:13:03.702 "state": "completed", 00:13:03.702 "digest": "sha384", 00:13:03.702 "dhgroup": "ffdhe8192" 00:13:03.702 } 00:13:03.702 } 00:13:03.702 ]' 00:13:03.702 04:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.702 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:03.702 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.961 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:03.961 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.961 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.961 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.961 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.220 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:13:04.788 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.788 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:04.788 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.788 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.788 04:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.788 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.788 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:04.788 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:04.788 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:04.788 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.788 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:04.788 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:04.788 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:04.788 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.788 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.788 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.788 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.047 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.047 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.047 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.306 00:13:05.565 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.565 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.565 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.824 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.824 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.824 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.824 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.824 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.824 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.824 { 00:13:05.824 "cntlid": 91, 00:13:05.824 "qid": 0, 00:13:05.824 "state": "enabled", 00:13:05.824 "thread": "nvmf_tgt_poll_group_000", 00:13:05.824 "listen_address": { 00:13:05.824 "trtype": "TCP", 00:13:05.824 "adrfam": "IPv4", 00:13:05.824 "traddr": "10.0.0.2", 00:13:05.824 "trsvcid": "4420" 00:13:05.824 }, 00:13:05.824 "peer_address": { 00:13:05.824 "trtype": "TCP", 00:13:05.824 "adrfam": "IPv4", 00:13:05.824 "traddr": "10.0.0.1", 00:13:05.824 "trsvcid": "53584" 00:13:05.824 }, 00:13:05.824 "auth": { 00:13:05.824 "state": "completed", 00:13:05.824 "digest": "sha384", 00:13:05.824 "dhgroup": "ffdhe8192" 00:13:05.824 } 00:13:05.824 } 00:13:05.824 ]' 00:13:05.824 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.824 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:05.824 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.824 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:05.824 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.824 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.824 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.824 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.083 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:13:06.649 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.649 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:06.649 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.649 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.649 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.649 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.649 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:06.649 04:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.908 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.475 00:13:07.475 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.475 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.475 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.733 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.733 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.733 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.733 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.733 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.733 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.733 { 00:13:07.733 "cntlid": 93, 00:13:07.733 "qid": 0, 00:13:07.733 "state": "enabled", 00:13:07.733 "thread": "nvmf_tgt_poll_group_000", 00:13:07.733 
"listen_address": { 00:13:07.733 "trtype": "TCP", 00:13:07.733 "adrfam": "IPv4", 00:13:07.733 "traddr": "10.0.0.2", 00:13:07.733 "trsvcid": "4420" 00:13:07.733 }, 00:13:07.733 "peer_address": { 00:13:07.733 "trtype": "TCP", 00:13:07.733 "adrfam": "IPv4", 00:13:07.733 "traddr": "10.0.0.1", 00:13:07.733 "trsvcid": "53608" 00:13:07.733 }, 00:13:07.733 "auth": { 00:13:07.733 "state": "completed", 00:13:07.733 "digest": "sha384", 00:13:07.733 "dhgroup": "ffdhe8192" 00:13:07.733 } 00:13:07.733 } 00:13:07.733 ]' 00:13:07.733 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.733 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.733 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.991 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:07.991 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.991 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.991 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.991 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.250 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:13:08.817 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.817 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:08.817 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.817 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.817 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.817 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.817 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:08.817 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:09.075 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:09.641 00:13:09.641 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.641 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.641 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.900 { 00:13:09.900 "cntlid": 95, 00:13:09.900 "qid": 0, 00:13:09.900 "state": "enabled", 00:13:09.900 "thread": "nvmf_tgt_poll_group_000", 00:13:09.900 "listen_address": { 00:13:09.900 "trtype": "TCP", 00:13:09.900 "adrfam": "IPv4", 00:13:09.900 "traddr": "10.0.0.2", 00:13:09.900 "trsvcid": "4420" 00:13:09.900 }, 00:13:09.900 "peer_address": { 00:13:09.900 "trtype": "TCP", 00:13:09.900 "adrfam": "IPv4", 00:13:09.900 "traddr": "10.0.0.1", 00:13:09.900 "trsvcid": "33382" 00:13:09.900 }, 00:13:09.900 "auth": { 00:13:09.900 "state": "completed", 00:13:09.900 "digest": "sha384", 00:13:09.900 "dhgroup": "ffdhe8192" 00:13:09.900 } 00:13:09.900 } 00:13:09.900 ]' 
00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.900 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.165 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:13:10.737 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.737 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:10.737 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.737 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.737 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.737 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:10.737 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:10.737 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.737 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:10.737 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:10.995 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:13:10.995 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.995 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:10.995 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:10.995 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:13:10.995 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.995 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.995 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.995 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.995 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.996 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.996 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.254 00:13:11.254 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.254 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.254 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.254 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.254 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.254 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.254 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.513 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.513 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.513 { 00:13:11.513 "cntlid": 97, 00:13:11.513 "qid": 0, 00:13:11.513 "state": "enabled", 00:13:11.513 "thread": "nvmf_tgt_poll_group_000", 00:13:11.513 "listen_address": { 00:13:11.513 "trtype": "TCP", 00:13:11.513 "adrfam": "IPv4", 00:13:11.513 "traddr": "10.0.0.2", 00:13:11.513 "trsvcid": "4420" 00:13:11.513 }, 00:13:11.513 "peer_address": { 00:13:11.513 "trtype": "TCP", 00:13:11.513 "adrfam": "IPv4", 00:13:11.513 "traddr": "10.0.0.1", 00:13:11.513 "trsvcid": "33416" 00:13:11.513 }, 00:13:11.513 "auth": { 00:13:11.513 "state": "completed", 00:13:11.513 "digest": "sha512", 00:13:11.513 "dhgroup": "null" 00:13:11.513 } 00:13:11.513 } 00:13:11.513 ]' 00:13:11.513 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.513 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.513 04:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.513 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:11.513 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.513 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.513 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.513 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.771 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:13:12.338 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.338 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:12.338 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.338 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.338 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.338 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.338 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:12.339 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.597 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.856 00:13:12.856 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.856 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.856 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.115 { 00:13:13.115 "cntlid": 99, 00:13:13.115 "qid": 0, 00:13:13.115 "state": "enabled", 00:13:13.115 "thread": "nvmf_tgt_poll_group_000", 00:13:13.115 "listen_address": { 00:13:13.115 "trtype": "TCP", 00:13:13.115 "adrfam": "IPv4", 00:13:13.115 "traddr": "10.0.0.2", 00:13:13.115 "trsvcid": "4420" 00:13:13.115 }, 00:13:13.115 "peer_address": { 00:13:13.115 "trtype": "TCP", 00:13:13.115 "adrfam": "IPv4", 00:13:13.115 "traddr": "10.0.0.1", 00:13:13.115 "trsvcid": "33448" 00:13:13.115 }, 00:13:13.115 "auth": { 00:13:13.115 "state": "completed", 00:13:13.115 "digest": "sha512", 00:13:13.115 "dhgroup": "null" 00:13:13.115 } 00:13:13.115 } 00:13:13.115 ]' 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.115 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.374 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:13:13.967 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.967 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:13.967 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.967 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.967 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.967 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:13.967 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:13.967 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.225 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.484 00:13:14.484 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.484 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.484 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.743 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.743 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.743 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.743 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.743 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.743 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:14.743 { 00:13:14.743 "cntlid": 101, 00:13:14.743 "qid": 0, 00:13:14.743 "state": "enabled", 00:13:14.743 "thread": "nvmf_tgt_poll_group_000", 00:13:14.743 "listen_address": { 00:13:14.743 "trtype": "TCP", 00:13:14.743 "adrfam": "IPv4", 00:13:14.743 "traddr": "10.0.0.2", 00:13:14.743 "trsvcid": "4420" 00:13:14.743 }, 00:13:14.743 "peer_address": { 00:13:14.743 "trtype": "TCP", 00:13:14.743 "adrfam": "IPv4", 00:13:14.743 "traddr": "10.0.0.1", 00:13:14.743 "trsvcid": "33468" 00:13:14.743 }, 00:13:14.743 "auth": { 00:13:14.743 "state": "completed", 00:13:14.743 "digest": "sha512", 00:13:14.743 "dhgroup": "null" 00:13:14.743 } 00:13:14.743 } 00:13:14.743 ]' 00:13:14.743 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:14.743 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.743 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:14.743 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:14.743 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:14.743 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.743 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.743 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.001 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:13:15.568 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.568 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:15.568 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.568 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.568 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.568 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:15.568 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:15.568 04:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:16.134 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:13:16.134 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.134 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:16.134 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:16.134 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:16.134 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.134 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:13:16.135 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.135 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.135 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.135 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.135 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:13:16.135 00:13:16.135 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.135 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.135 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.393 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.393 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.393 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.393 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.652 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.652 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:16.652 { 00:13:16.652 "cntlid": 103, 00:13:16.652 "qid": 0, 00:13:16.652 "state": "enabled", 00:13:16.652 "thread": "nvmf_tgt_poll_group_000", 00:13:16.652 "listen_address": { 00:13:16.652 "trtype": "TCP", 00:13:16.652 "adrfam": "IPv4", 00:13:16.652 "traddr": "10.0.0.2", 00:13:16.652 "trsvcid": "4420" 00:13:16.652 }, 00:13:16.652 "peer_address": { 00:13:16.652 "trtype": "TCP", 00:13:16.652 "adrfam": "IPv4", 00:13:16.652 "traddr": "10.0.0.1", 00:13:16.652 "trsvcid": "33502" 00:13:16.652 }, 00:13:16.652 "auth": { 00:13:16.652 "state": "completed", 00:13:16.652 "digest": "sha512", 00:13:16.652 "dhgroup": "null" 00:13:16.652 } 00:13:16.652 } 00:13:16.652 ]' 00:13:16.652 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:16.652 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.652 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:16.652 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:16.652 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:16.652 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.652 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.652 04:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.910 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:13:17.478 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.478 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:17.478 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.478 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.478 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.478 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:17.478 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:17.478 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:17.478 04:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:17.736 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:13:17.736 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.736 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:17.736 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:17.736 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:17.736 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.736 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.736 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.736 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.736 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.737 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.737 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.994 00:13:17.995 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.995 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.995 04:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.253 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.253 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.253 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.253 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.511 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.511 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:18.511 { 00:13:18.511 "cntlid": 105, 00:13:18.511 "qid": 0, 00:13:18.511 "state": "enabled", 00:13:18.511 "thread": "nvmf_tgt_poll_group_000", 00:13:18.511 "listen_address": { 00:13:18.511 "trtype": "TCP", 00:13:18.511 "adrfam": "IPv4", 00:13:18.511 "traddr": "10.0.0.2", 00:13:18.511 "trsvcid": "4420" 00:13:18.511 }, 00:13:18.511 "peer_address": { 00:13:18.511 "trtype": "TCP", 00:13:18.511 "adrfam": "IPv4", 00:13:18.511 "traddr": "10.0.0.1", 00:13:18.511 "trsvcid": "33534" 00:13:18.511 }, 00:13:18.511 "auth": { 00:13:18.511 "state": "completed", 00:13:18.511 "digest": "sha512", 00:13:18.511 "dhgroup": "ffdhe2048" 00:13:18.511 } 00:13:18.511 } 00:13:18.511 ]' 00:13:18.511 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:18.511 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:18.511 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:18.511 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:18.511 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:18.511 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.511 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.511 04:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.769 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:13:19.337 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.337 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:19.337 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.337 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.337 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.337 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:19.337 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:19.337 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.595 04:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.854 00:13:19.854 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.854 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.854 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.129 04:09:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.129 { 00:13:20.129 "cntlid": 107, 00:13:20.129 "qid": 0, 00:13:20.129 "state": "enabled", 00:13:20.129 "thread": "nvmf_tgt_poll_group_000", 00:13:20.129 "listen_address": { 00:13:20.129 "trtype": "TCP", 00:13:20.129 "adrfam": "IPv4", 00:13:20.129 "traddr": "10.0.0.2", 00:13:20.129 "trsvcid": "4420" 00:13:20.129 }, 00:13:20.129 "peer_address": { 00:13:20.129 "trtype": "TCP", 00:13:20.129 "adrfam": "IPv4", 00:13:20.129 "traddr": "10.0.0.1", 00:13:20.129 "trsvcid": "38770" 00:13:20.129 }, 00:13:20.129 "auth": { 00:13:20.129 "state": "completed", 00:13:20.129 "digest": "sha512", 00:13:20.129 "dhgroup": "ffdhe2048" 00:13:20.129 } 00:13:20.129 } 00:13:20.129 ]' 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.129 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.387 04:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:13:20.953 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.953 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:20.953 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.953 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.953 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.953 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.953 04:09:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:20.953 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.212 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.470 00:13:21.729 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.729 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.729 04:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.729 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.729 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.729 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.729 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.729 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.729 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:21.729 { 00:13:21.729 "cntlid": 109, 00:13:21.729 "qid": 0, 00:13:21.729 "state": "enabled", 00:13:21.729 "thread": "nvmf_tgt_poll_group_000", 00:13:21.729 "listen_address": { 00:13:21.729 "trtype": "TCP", 00:13:21.729 "adrfam": "IPv4", 00:13:21.729 "traddr": "10.0.0.2", 00:13:21.729 "trsvcid": "4420" 00:13:21.729 }, 00:13:21.729 "peer_address": { 00:13:21.729 "trtype": "TCP", 00:13:21.729 "adrfam": "IPv4", 00:13:21.729 "traddr": "10.0.0.1", 00:13:21.729 "trsvcid": "38794" 00:13:21.729 }, 00:13:21.729 "auth": { 00:13:21.729 "state": "completed", 00:13:21.729 "digest": "sha512", 00:13:21.729 "dhgroup": "ffdhe2048" 00:13:21.729 } 00:13:21.729 } 00:13:21.729 ]' 00:13:21.729 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.987 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.987 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.987 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:21.987 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.987 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.987 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.987 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.246 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:13:22.811 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.811 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:22.811 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.811 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.811 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.811 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.811 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:22.811 04:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:23.085 04:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:23.085 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:23.357 00:13:23.357 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.357 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.357 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.357 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.357 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.357 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.357 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.357 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.357 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.357 { 00:13:23.357 "cntlid": 111, 00:13:23.357 "qid": 0, 00:13:23.357 "state": "enabled", 00:13:23.357 "thread": "nvmf_tgt_poll_group_000", 00:13:23.357 "listen_address": { 00:13:23.357 "trtype": "TCP", 00:13:23.357 "adrfam": "IPv4", 00:13:23.357 "traddr": "10.0.0.2", 00:13:23.357 "trsvcid": "4420" 00:13:23.357 }, 00:13:23.357 "peer_address": { 00:13:23.357 "trtype": "TCP", 00:13:23.357 "adrfam": "IPv4", 00:13:23.357 "traddr": "10.0.0.1", 00:13:23.357 "trsvcid": 
"38824" 00:13:23.357 }, 00:13:23.357 "auth": { 00:13:23.357 "state": "completed", 00:13:23.357 "digest": "sha512", 00:13:23.357 "dhgroup": "ffdhe2048" 00:13:23.357 } 00:13:23.357 } 00:13:23.357 ]' 00:13:23.357 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.615 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.615 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.615 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:23.615 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.615 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.615 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.615 04:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.873 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:13:24.439 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.439 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:24.439 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.439 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.439 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.440 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:24.440 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:24.440 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:24.440 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.698 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.957 00:13:24.957 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.957 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.957 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.215 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.215 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.215 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.215 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.215 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.215 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:25.215 { 00:13:25.215 "cntlid": 113, 00:13:25.215 "qid": 0, 00:13:25.215 "state": "enabled", 00:13:25.215 "thread": "nvmf_tgt_poll_group_000", 00:13:25.215 "listen_address": { 00:13:25.215 "trtype": "TCP", 00:13:25.215 "adrfam": "IPv4", 00:13:25.215 "traddr": "10.0.0.2", 00:13:25.215 "trsvcid": "4420" 00:13:25.215 }, 00:13:25.215 "peer_address": { 00:13:25.215 "trtype": "TCP", 00:13:25.215 "adrfam": "IPv4", 00:13:25.215 "traddr": "10.0.0.1", 00:13:25.215 "trsvcid": "38850" 00:13:25.215 }, 00:13:25.215 "auth": { 00:13:25.215 "state": "completed", 00:13:25.215 "digest": "sha512", 00:13:25.215 "dhgroup": "ffdhe3072" 00:13:25.215 } 00:13:25.215 } 00:13:25.215 ]' 00:13:25.215 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:25.474 04:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.474 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:25.474 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:25.474 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:25.474 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.474 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.474 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.732 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:13:26.300 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.300 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:26.300 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.300 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.300 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.300 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:26.300 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:26.300 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.558 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.817 00:13:26.817 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.817 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.817 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.075 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.076 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.076 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.076 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.076 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.076 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:27.076 { 00:13:27.076 "cntlid": 115, 00:13:27.076 "qid": 0, 00:13:27.076 "state": "enabled", 00:13:27.076 "thread": "nvmf_tgt_poll_group_000", 00:13:27.076 "listen_address": { 00:13:27.076 "trtype": "TCP", 00:13:27.076 "adrfam": "IPv4", 00:13:27.076 "traddr": "10.0.0.2", 00:13:27.076 "trsvcid": "4420" 00:13:27.076 }, 00:13:27.076 "peer_address": { 00:13:27.076 "trtype": "TCP", 00:13:27.076 "adrfam": "IPv4", 00:13:27.076 "traddr": "10.0.0.1", 00:13:27.076 "trsvcid": "38874" 00:13:27.076 }, 00:13:27.076 "auth": { 00:13:27.076 "state": "completed", 00:13:27.076 "digest": "sha512", 00:13:27.076 "dhgroup": "ffdhe3072" 00:13:27.076 } 00:13:27.076 } 00:13:27.076 ]' 00:13:27.076 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.076 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.076 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.076 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:27.076 04:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:27.334 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.334 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.334 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.592 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:13:28.163 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.163 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:28.163 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.163 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.163 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.163 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:28.163 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:28.163 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.422 04:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.422 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.681 00:13:28.681 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.681 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.681 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.939 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.939 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.939 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.939 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.939 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.939 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.939 { 00:13:28.939 "cntlid": 117, 00:13:28.939 "qid": 0, 00:13:28.939 "state": "enabled", 00:13:28.939 "thread": "nvmf_tgt_poll_group_000", 00:13:28.939 "listen_address": { 00:13:28.939 "trtype": "TCP", 00:13:28.939 "adrfam": "IPv4", 00:13:28.939 "traddr": "10.0.0.2", 00:13:28.939 "trsvcid": "4420" 00:13:28.939 }, 00:13:28.939 "peer_address": { 00:13:28.939 "trtype": "TCP", 00:13:28.939 "adrfam": "IPv4", 00:13:28.939 "traddr": "10.0.0.1", 00:13:28.939 "trsvcid": "56232" 00:13:28.939 }, 00:13:28.939 "auth": { 00:13:28.939 "state": "completed", 00:13:28.939 "digest": "sha512", 00:13:28.939 "dhgroup": "ffdhe3072" 00:13:28.939 } 00:13:28.939 } 00:13:28.939 ]' 00:13:28.939 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.939 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.939 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.197 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:29.197 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.197 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.197 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.197 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.456 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:13:30.030 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.031 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:30.031 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.031 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.031 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.031 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:30.031 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:30.031 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:30.295 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:30.553 00:13:30.553 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:30.553 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:30.553 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.812 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.812 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.812 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.812 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.812 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.812 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.812 { 00:13:30.812 "cntlid": 119, 00:13:30.812 "qid": 0, 00:13:30.812 "state": "enabled", 00:13:30.812 "thread": "nvmf_tgt_poll_group_000", 00:13:30.812 "listen_address": { 00:13:30.812 "trtype": "TCP", 00:13:30.812 "adrfam": "IPv4", 00:13:30.812 "traddr": "10.0.0.2", 00:13:30.812 "trsvcid": "4420" 00:13:30.812 }, 00:13:30.812 "peer_address": { 00:13:30.812 "trtype": "TCP", 00:13:30.812 "adrfam": "IPv4", 00:13:30.812 "traddr": "10.0.0.1", 00:13:30.812 "trsvcid": "56274" 00:13:30.812 }, 00:13:30.812 "auth": { 00:13:30.812 "state": "completed", 00:13:30.812 "digest": "sha512", 00:13:30.812 "dhgroup": "ffdhe3072" 00:13:30.812 } 00:13:30.812 } 00:13:30.812 ]' 00:13:30.812 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.070 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.070 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:31.070 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:31.070 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:31.071 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.071 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.071 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.329 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret 
DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:13:31.896 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.896 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:31.896 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.896 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.896 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.896 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:31.896 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.896 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:31.896 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.155 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
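The trace above completes one pass of the sha512 / ffdhe4096 / key0 combination: the previous host entry is removed, the host-side initiator is restricted to the digest and DH group under test, the host is re-added to the subsystem with a DH-HMAC-CHAP key pair, and a controller is attached with the same keys. Condensed into a minimal sketch (not a copy of target/auth.sh; the paths, NQNs and the key0/ckey0 key names are taken from the trace and assume those keys were already registered earlier in the run):

  #!/usr/bin/env bash
  # Assumed values, copied from the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274

  # 1. Limit the host-side bdev_nvme layer to the digest/dhgroup under test.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # 2. Allow the host on the target subsystem with a DH-HMAC-CHAP key pair.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach a controller from the host, authenticating with the same keys.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

The log entries that follow verify what was negotiated and then tear the controller down again before moving on to the next key/dhgroup combination.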
00:13:32.416 00:13:32.675 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.675 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.675 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.675 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.675 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.675 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.675 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.675 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.675 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.675 { 00:13:32.675 "cntlid": 121, 00:13:32.675 "qid": 0, 00:13:32.675 "state": "enabled", 00:13:32.675 "thread": "nvmf_tgt_poll_group_000", 00:13:32.675 "listen_address": { 00:13:32.675 "trtype": "TCP", 00:13:32.675 "adrfam": "IPv4", 00:13:32.675 "traddr": "10.0.0.2", 00:13:32.675 "trsvcid": "4420" 00:13:32.675 }, 00:13:32.675 "peer_address": { 00:13:32.675 "trtype": "TCP", 00:13:32.675 "adrfam": "IPv4", 00:13:32.675 "traddr": "10.0.0.1", 00:13:32.675 "trsvcid": "56304" 00:13:32.675 }, 00:13:32.675 "auth": { 00:13:32.675 "state": "completed", 00:13:32.675 "digest": "sha512", 00:13:32.675 "dhgroup": "ffdhe4096" 00:13:32.675 } 00:13:32.675 } 00:13:32.675 ]' 00:13:32.675 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.934 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.934 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.934 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:32.934 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.934 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.934 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.934 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.193 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:13:33.761 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.761 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.761 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:33.761 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.761 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.761 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.761 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.761 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:33.761 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.761 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.329 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.329 { 00:13:34.329 "cntlid": 123, 00:13:34.329 "qid": 0, 00:13:34.329 "state": "enabled", 00:13:34.329 "thread": "nvmf_tgt_poll_group_000", 00:13:34.329 "listen_address": { 00:13:34.329 "trtype": "TCP", 00:13:34.329 "adrfam": "IPv4", 00:13:34.329 "traddr": "10.0.0.2", 00:13:34.329 "trsvcid": "4420" 00:13:34.329 }, 00:13:34.329 "peer_address": { 00:13:34.329 "trtype": "TCP", 00:13:34.329 "adrfam": "IPv4", 00:13:34.329 "traddr": "10.0.0.1", 00:13:34.329 "trsvcid": "56320" 00:13:34.329 }, 00:13:34.329 "auth": { 00:13:34.329 "state": "completed", 00:13:34.329 "digest": "sha512", 00:13:34.329 "dhgroup": "ffdhe4096" 00:13:34.329 } 00:13:34.329 } 00:13:34.329 ]' 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.329 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.588 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:34.588 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.588 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.588 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.588 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.847 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:13:35.415 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.415 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:35.415 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
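The qpair dump above is the verification half of each iteration: the controller name is read back, and the target-side qpair must report a completed DH-HMAC-CHAP negotiation with the expected digest and DH group. A condensed sketch of that check and the subsequent teardown, using the same RPCs and jq filters that appear in the trace (the expected values are the ones from this ffdhe4096 iteration, and the variables repeat the assumptions from the sketch above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274

  # The attached controller should show up under the expected name.
  [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # The target-side qpair must report a completed DH-HMAC-CHAP handshake
  # with the digest/dhgroup selected for this iteration.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Teardown before the next combination: detach on the host side,
  # then drop the host entry from the subsystem.
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the full run the kernel initiator path is also exercised between those last two steps (nvme connect ... --dhchap-secret followed by nvme disconnect, as seen in the surrounding entries) before the host entry is removed.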
00:13:35.415 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.415 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.415 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.415 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:35.415 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.674 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.933 00:13:35.933 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:35.933 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:35.933 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.193 { 00:13:36.193 "cntlid": 125, 00:13:36.193 "qid": 0, 00:13:36.193 "state": "enabled", 00:13:36.193 "thread": "nvmf_tgt_poll_group_000", 00:13:36.193 "listen_address": { 00:13:36.193 "trtype": "TCP", 00:13:36.193 "adrfam": "IPv4", 00:13:36.193 "traddr": "10.0.0.2", 00:13:36.193 "trsvcid": "4420" 00:13:36.193 }, 00:13:36.193 "peer_address": { 00:13:36.193 "trtype": "TCP", 00:13:36.193 "adrfam": "IPv4", 00:13:36.193 "traddr": "10.0.0.1", 00:13:36.193 "trsvcid": "56362" 00:13:36.193 }, 00:13:36.193 "auth": { 00:13:36.193 "state": "completed", 00:13:36.193 "digest": "sha512", 00:13:36.193 "dhgroup": "ffdhe4096" 00:13:36.193 } 00:13:36.193 } 00:13:36.193 ]' 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:36.193 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.451 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.451 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.451 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.451 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:13:37.019 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.019 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:37.019 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.019 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.019 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.019 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.019 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:37.019 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:37.278 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:37.537 00:13:37.795 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.795 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.795 04:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.054 { 00:13:38.054 "cntlid": 127, 00:13:38.054 "qid": 0, 00:13:38.054 "state": "enabled", 00:13:38.054 "thread": 
"nvmf_tgt_poll_group_000", 00:13:38.054 "listen_address": { 00:13:38.054 "trtype": "TCP", 00:13:38.054 "adrfam": "IPv4", 00:13:38.054 "traddr": "10.0.0.2", 00:13:38.054 "trsvcid": "4420" 00:13:38.054 }, 00:13:38.054 "peer_address": { 00:13:38.054 "trtype": "TCP", 00:13:38.054 "adrfam": "IPv4", 00:13:38.054 "traddr": "10.0.0.1", 00:13:38.054 "trsvcid": "56396" 00:13:38.054 }, 00:13:38.054 "auth": { 00:13:38.054 "state": "completed", 00:13:38.054 "digest": "sha512", 00:13:38.054 "dhgroup": "ffdhe4096" 00:13:38.054 } 00:13:38.054 } 00:13:38.054 ]' 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.054 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.311 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:13:38.878 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.878 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:38.878 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.878 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.878 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.878 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.878 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.878 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:38.878 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe6144 0 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.137 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.396 00:13:39.655 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.655 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.655 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.914 { 00:13:39.914 "cntlid": 129, 00:13:39.914 "qid": 0, 00:13:39.914 "state": "enabled", 00:13:39.914 "thread": "nvmf_tgt_poll_group_000", 00:13:39.914 "listen_address": { 00:13:39.914 "trtype": "TCP", 00:13:39.914 "adrfam": "IPv4", 00:13:39.914 "traddr": "10.0.0.2", 00:13:39.914 "trsvcid": "4420" 00:13:39.914 }, 00:13:39.914 "peer_address": { 00:13:39.914 "trtype": "TCP", 00:13:39.914 "adrfam": "IPv4", 00:13:39.914 "traddr": "10.0.0.1", 00:13:39.914 "trsvcid": "55402" 00:13:39.914 }, 
00:13:39.914 "auth": { 00:13:39.914 "state": "completed", 00:13:39.914 "digest": "sha512", 00:13:39.914 "dhgroup": "ffdhe6144" 00:13:39.914 } 00:13:39.914 } 00:13:39.914 ]' 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.914 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.171 04:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:41.108 04:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.108 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.675 00:13:41.675 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.675 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.675 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.934 { 00:13:41.934 "cntlid": 131, 00:13:41.934 "qid": 0, 00:13:41.934 "state": "enabled", 00:13:41.934 "thread": "nvmf_tgt_poll_group_000", 00:13:41.934 "listen_address": { 00:13:41.934 "trtype": "TCP", 00:13:41.934 "adrfam": "IPv4", 00:13:41.934 "traddr": "10.0.0.2", 00:13:41.934 "trsvcid": "4420" 00:13:41.934 }, 00:13:41.934 "peer_address": { 00:13:41.934 "trtype": "TCP", 00:13:41.934 "adrfam": "IPv4", 00:13:41.934 "traddr": "10.0.0.1", 00:13:41.934 "trsvcid": "55408" 00:13:41.934 }, 00:13:41.934 "auth": { 00:13:41.934 "state": "completed", 00:13:41.934 "digest": "sha512", 00:13:41.934 "dhgroup": "ffdhe6144" 00:13:41.934 } 00:13:41.934 } 00:13:41.934 ]' 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.934 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.193 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:13:42.761 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.761 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:42.761 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.761 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.761 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.761 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.761 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:42.761 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.019 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.585 00:13:43.585 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.585 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.585 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.585 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.585 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.585 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.585 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.585 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.842 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.842 { 00:13:43.842 "cntlid": 133, 00:13:43.842 "qid": 0, 00:13:43.842 "state": "enabled", 00:13:43.842 "thread": "nvmf_tgt_poll_group_000", 00:13:43.842 "listen_address": { 00:13:43.842 "trtype": "TCP", 00:13:43.842 "adrfam": "IPv4", 00:13:43.842 "traddr": "10.0.0.2", 00:13:43.842 "trsvcid": "4420" 00:13:43.842 }, 00:13:43.842 "peer_address": { 00:13:43.842 "trtype": "TCP", 00:13:43.842 "adrfam": "IPv4", 00:13:43.842 "traddr": "10.0.0.1", 00:13:43.842 "trsvcid": "55438" 00:13:43.842 }, 00:13:43.842 "auth": { 00:13:43.842 "state": "completed", 00:13:43.842 "digest": "sha512", 00:13:43.842 "dhgroup": "ffdhe6144" 00:13:43.842 } 00:13:43.842 } 00:13:43.842 ]' 00:13:43.842 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.842 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.842 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.842 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:43.842 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.842 04:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.842 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.842 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.100 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:13:44.667 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.667 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:44.667 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.667 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.667 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.667 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.668 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:44.668 04:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.926 04:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:44.926 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:45.493 00:13:45.493 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.493 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.493 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.493 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.493 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.493 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.493 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.751 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.751 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.751 { 00:13:45.751 "cntlid": 135, 00:13:45.751 "qid": 0, 00:13:45.751 "state": "enabled", 00:13:45.751 "thread": "nvmf_tgt_poll_group_000", 00:13:45.751 "listen_address": { 00:13:45.751 "trtype": "TCP", 00:13:45.751 "adrfam": "IPv4", 00:13:45.751 "traddr": "10.0.0.2", 00:13:45.751 "trsvcid": "4420" 00:13:45.751 }, 00:13:45.751 "peer_address": { 00:13:45.752 "trtype": "TCP", 00:13:45.752 "adrfam": "IPv4", 00:13:45.752 "traddr": "10.0.0.1", 00:13:45.752 "trsvcid": "55470" 00:13:45.752 }, 00:13:45.752 "auth": { 00:13:45.752 "state": "completed", 00:13:45.752 "digest": "sha512", 00:13:45.752 "dhgroup": "ffdhe6144" 00:13:45.752 } 00:13:45.752 } 00:13:45.752 ]' 00:13:45.752 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.752 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.752 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.752 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:45.752 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.752 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.752 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.752 04:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.010 04:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:13:46.576 04:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.576 04:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:46.576 04:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.576 04:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.576 04:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.576 04:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.576 04:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.576 04:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:46.576 04:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.835 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.400 00:13:47.400 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.400 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:47.400 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.659 { 00:13:47.659 "cntlid": 137, 00:13:47.659 "qid": 0, 00:13:47.659 "state": "enabled", 00:13:47.659 "thread": "nvmf_tgt_poll_group_000", 00:13:47.659 "listen_address": { 00:13:47.659 "trtype": "TCP", 00:13:47.659 "adrfam": "IPv4", 00:13:47.659 "traddr": "10.0.0.2", 00:13:47.659 "trsvcid": "4420" 00:13:47.659 }, 00:13:47.659 "peer_address": { 00:13:47.659 "trtype": "TCP", 00:13:47.659 "adrfam": "IPv4", 00:13:47.659 "traddr": "10.0.0.1", 00:13:47.659 "trsvcid": "55516" 00:13:47.659 }, 00:13:47.659 "auth": { 00:13:47.659 "state": "completed", 00:13:47.659 "digest": "sha512", 00:13:47.659 "dhgroup": "ffdhe8192" 00:13:47.659 } 00:13:47.659 } 00:13:47.659 ]' 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.659 04:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.917 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: 
--dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:13:48.505 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.505 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:48.505 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.505 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.764 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.764 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.764 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:48.764 04:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.764 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.331 00:13:49.331 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:13:49.331 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.331 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.591 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.591 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.591 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.591 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.591 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.591 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.591 { 00:13:49.591 "cntlid": 139, 00:13:49.591 "qid": 0, 00:13:49.591 "state": "enabled", 00:13:49.591 "thread": "nvmf_tgt_poll_group_000", 00:13:49.591 "listen_address": { 00:13:49.591 "trtype": "TCP", 00:13:49.591 "adrfam": "IPv4", 00:13:49.591 "traddr": "10.0.0.2", 00:13:49.591 "trsvcid": "4420" 00:13:49.591 }, 00:13:49.591 "peer_address": { 00:13:49.591 "trtype": "TCP", 00:13:49.591 "adrfam": "IPv4", 00:13:49.591 "traddr": "10.0.0.1", 00:13:49.591 "trsvcid": "47136" 00:13:49.591 }, 00:13:49.591 "auth": { 00:13:49.591 "state": "completed", 00:13:49.591 "digest": "sha512", 00:13:49.591 "dhgroup": "ffdhe8192" 00:13:49.591 } 00:13:49.591 } 00:13:49.591 ]' 00:13:49.591 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.849 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:49.849 04:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.849 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:49.849 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.849 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.849 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.849 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.118 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:01:ZTVjZWQ2NmVlYTBiZDM2OWRkZGQ0ZjUxNzg4ZjA1MTEnqGx+: --dhchap-ctrl-secret DHHC-1:02:ZDBlMGU3MTEzMjhkMjFhOWFjNTVkZmU3M2UwYzA0NDBiYjg0ZGMxNGQxZDM4MGRm1EFp0A==: 00:13:50.686 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.686 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:50.686 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.686 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.686 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.686 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.686 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:50.686 04:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.945 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.513 00:13:51.513 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.513 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.513 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.772 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:13:51.772 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.772 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.772 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.772 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.772 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.772 { 00:13:51.772 "cntlid": 141, 00:13:51.772 "qid": 0, 00:13:51.772 "state": "enabled", 00:13:51.772 "thread": "nvmf_tgt_poll_group_000", 00:13:51.772 "listen_address": { 00:13:51.772 "trtype": "TCP", 00:13:51.772 "adrfam": "IPv4", 00:13:51.772 "traddr": "10.0.0.2", 00:13:51.772 "trsvcid": "4420" 00:13:51.772 }, 00:13:51.772 "peer_address": { 00:13:51.772 "trtype": "TCP", 00:13:51.772 "adrfam": "IPv4", 00:13:51.772 "traddr": "10.0.0.1", 00:13:51.772 "trsvcid": "47160" 00:13:51.772 }, 00:13:51.772 "auth": { 00:13:51.772 "state": "completed", 00:13:51.772 "digest": "sha512", 00:13:51.772 "dhgroup": "ffdhe8192" 00:13:51.772 } 00:13:51.772 } 00:13:51.772 ]' 00:13:51.772 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.772 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.772 04:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.772 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:51.772 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.772 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.772 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.772 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.030 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:02:MjJhZGUzY2IwNGEwZWIwMDdmZDRmOGE1Y2IxNzI2MjRhYzk5OWY0OGQ2M2UxYWU5Jyv+hw==: --dhchap-ctrl-secret DHHC-1:01:ZGM4MThjNTQ3MDk3ZjMxYWY0MjhmZjFmZDI4YzQyMGS8rnJw: 00:13:52.598 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.598 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:52.598 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.598 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.598 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.598 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.598 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:52.598 04:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:52.857 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:53.425 00:13:53.425 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.425 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.425 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.684 { 00:13:53.684 "cntlid": 143, 00:13:53.684 "qid": 0, 00:13:53.684 "state": "enabled", 00:13:53.684 "thread": "nvmf_tgt_poll_group_000", 00:13:53.684 "listen_address": { 00:13:53.684 "trtype": "TCP", 00:13:53.684 "adrfam": "IPv4", 00:13:53.684 "traddr": "10.0.0.2", 00:13:53.684 "trsvcid": "4420" 00:13:53.684 }, 00:13:53.684 "peer_address": { 00:13:53.684 "trtype": "TCP", 00:13:53.684 "adrfam": "IPv4", 00:13:53.684 "traddr": "10.0.0.1", 00:13:53.684 "trsvcid": "47180" 00:13:53.684 }, 00:13:53.684 "auth": { 00:13:53.684 "state": "completed", 00:13:53.684 "digest": "sha512", 00:13:53.684 "dhgroup": "ffdhe8192" 00:13:53.684 } 00:13:53.684 } 00:13:53.684 ]' 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:53.684 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.943 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.943 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.943 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.943 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:13:54.510 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.510 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:54.510 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.510 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.510 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.510 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:54.510 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:54.510 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:54.510 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:54.510 04:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:54.510 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.769 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.337 00:13:55.337 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.337 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.337 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.612 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.612 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.613 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.613 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.613 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:13:55.613 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.613 { 00:13:55.613 "cntlid": 145, 00:13:55.613 "qid": 0, 00:13:55.613 "state": "enabled", 00:13:55.613 "thread": "nvmf_tgt_poll_group_000", 00:13:55.613 "listen_address": { 00:13:55.613 "trtype": "TCP", 00:13:55.613 "adrfam": "IPv4", 00:13:55.613 "traddr": "10.0.0.2", 00:13:55.613 "trsvcid": "4420" 00:13:55.613 }, 00:13:55.613 "peer_address": { 00:13:55.613 "trtype": "TCP", 00:13:55.613 "adrfam": "IPv4", 00:13:55.613 "traddr": "10.0.0.1", 00:13:55.613 "trsvcid": "47208" 00:13:55.613 }, 00:13:55.613 "auth": { 00:13:55.613 "state": "completed", 00:13:55.613 "digest": "sha512", 00:13:55.613 "dhgroup": "ffdhe8192" 00:13:55.613 } 00:13:55.613 } 00:13:55.613 ]' 00:13:55.613 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.613 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:55.613 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.613 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:55.882 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.882 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.882 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.882 04:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.882 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:00:YmFiOTU3ZDRmMWNhZTI1ZDk0MWZlMjY0OTE1OTNkNjkzNmY0NzZlZWZjYzA4MzVit2OTww==: --dhchap-ctrl-secret DHHC-1:03:ZjY3MDIyMGRlNDhkMWYyNTIxODUzNzNiOTNmZTdlNGRhMjMyOGRlMzBjNTdiZDcwYzdjNTZhMmM1MmFjNDNlNqhiwL0=: 00:13:56.818 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.818 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:56.818 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.818 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.818 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.818 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.819 04:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:56.819 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:57.077 request: 00:13:57.077 { 00:13:57.077 "name": "nvme0", 00:13:57.077 "trtype": "tcp", 00:13:57.077 "traddr": "10.0.0.2", 00:13:57.077 "adrfam": "ipv4", 00:13:57.077 "trsvcid": "4420", 00:13:57.077 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:57.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274", 00:13:57.077 "prchk_reftag": false, 00:13:57.078 "prchk_guard": false, 00:13:57.078 "hdgst": false, 00:13:57.078 "ddgst": false, 00:13:57.078 "dhchap_key": "key2", 00:13:57.078 "method": "bdev_nvme_attach_controller", 00:13:57.078 "req_id": 1 00:13:57.078 } 00:13:57.078 Got JSON-RPC error response 00:13:57.078 response: 00:13:57.078 { 00:13:57.078 "code": -5, 00:13:57.078 "message": "Input/output error" 00:13:57.078 } 00:13:57.078 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:57.078 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:57.078 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:57.078 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:57.078 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 
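Condensing the key-mismatch case traced above into a short sketch (paths, NQNs and key names are copied from the log; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the DH-HMAC-CHAP keys key1/key2 are assumed to have been registered earlier in the run): the target-side host entry carries only key1, so the host-side attach with key2 is expected to fail with the JSON-RPC -5 Input/output error visible just above.

# target side: authorize the host with key1 only
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1

# host side: an attach with key2 must be rejected; succeeding here would be a test failure
if rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
    echo "unexpected: attach with a mismatched DH-HMAC-CHAP key succeeded" >&2
    exit 1
fi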
00:13:57.078 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.078 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.336 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:57.337 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:57.904 request: 00:13:57.904 { 00:13:57.904 "name": "nvme0", 00:13:57.904 "trtype": "tcp", 00:13:57.904 "traddr": "10.0.0.2", 00:13:57.904 "adrfam": "ipv4", 00:13:57.904 "trsvcid": "4420", 00:13:57.904 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:57.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274", 00:13:57.904 "prchk_reftag": false, 00:13:57.904 "prchk_guard": false, 00:13:57.904 "hdgst": false, 00:13:57.904 "ddgst": false, 00:13:57.904 "dhchap_key": "key1", 00:13:57.904 "dhchap_ctrlr_key": "ckey2", 00:13:57.904 "method": "bdev_nvme_attach_controller", 
00:13:57.904 "req_id": 1 00:13:57.904 } 00:13:57.904 Got JSON-RPC error response 00:13:57.904 response: 00:13:57.904 { 00:13:57.904 "code": -5, 00:13:57.904 "message": "Input/output error" 00:13:57.904 } 00:13:57.904 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:57.904 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:57.904 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:57.904 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:57.904 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:57.904 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.904 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.904 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.904 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key1 00:13:57.905 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.905 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.905 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.905 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.905 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:57.905 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.905 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:57.905 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.905 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:57.905 04:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.905 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.905 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.164 request: 00:13:58.164 { 00:13:58.164 "name": "nvme0", 00:13:58.164 "trtype": "tcp", 00:13:58.164 "traddr": "10.0.0.2", 00:13:58.164 "adrfam": "ipv4", 00:13:58.164 "trsvcid": "4420", 00:13:58.164 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:58.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274", 00:13:58.164 "prchk_reftag": false, 00:13:58.164 "prchk_guard": false, 00:13:58.164 "hdgst": false, 00:13:58.164 "ddgst": false, 00:13:58.164 "dhchap_key": "key1", 00:13:58.164 "dhchap_ctrlr_key": "ckey1", 00:13:58.164 "method": "bdev_nvme_attach_controller", 00:13:58.164 "req_id": 1 00:13:58.164 } 00:13:58.164 Got JSON-RPC error response 00:13:58.164 response: 00:13:58.164 { 00:13:58.164 "code": -5, 00:13:58.164 "message": "Input/output error" 00:13:58.164 } 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 82935 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82935 ']' 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82935 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82935 00:13:58.423 killing process with pid 82935 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82935' 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82935 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82935 00:13:58.423 04:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=85795 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 85795 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 85795 ']' 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.423 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 85795 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 85795 ']' 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
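For reference, the target restart performed by the two statements above reduces to the following sketch (binary path, network namespace and flags are copied from the log; the explicit backgrounding and pid capture are assumptions about how the helper launches it): the target comes back up with -L nvmf_auth so the subsequent authentication exchanges are traced, and --wait-for-rpc holds initialization until the harness reaches /var/tmp/spdk.sock.

# relaunch the target with DH-HMAC-CHAP debug logging enabled, inside the test netns
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# the harness then waits for the RPC socket before issuing any further commands
# (the waitforlisten call in the log polls /var/tmp/spdk.sock for this pid)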
00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.798 04:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.798 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.798 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:59.798 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:59.798 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.798 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:00.061 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:00.626 00:14:00.626 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.626 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.626 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
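Each successful connect_authenticate pass in this log, including the sha512/ffdhe8192 one in progress here, boils down to the same host-side sequence; a condensed sketch built only from RPCs and jq filters that appear verbatim in the log (rpc.py again stands for the repo's scripts/rpc.py, and key3 is assumed to be registered already). The qpair dump that follows below confirms the expected digest, dhgroup and completed auth state.

# host side: attach to the subsystem, authenticating with key3
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

# the controller must show up on the host ...
[[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# ... and the target must report a completed sha512/ffdhe8192 authentication on the qpair
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# tear the host-side controller down again before the next case
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0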
00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.883 { 00:14:00.883 "cntlid": 1, 00:14:00.883 "qid": 0, 00:14:00.883 "state": "enabled", 00:14:00.883 "thread": "nvmf_tgt_poll_group_000", 00:14:00.883 "listen_address": { 00:14:00.883 "trtype": "TCP", 00:14:00.883 "adrfam": "IPv4", 00:14:00.883 "traddr": "10.0.0.2", 00:14:00.883 "trsvcid": "4420" 00:14:00.883 }, 00:14:00.883 "peer_address": { 00:14:00.883 "trtype": "TCP", 00:14:00.883 "adrfam": "IPv4", 00:14:00.883 "traddr": "10.0.0.1", 00:14:00.883 "trsvcid": "51070" 00:14:00.883 }, 00:14:00.883 "auth": { 00:14:00.883 "state": "completed", 00:14:00.883 "digest": "sha512", 00:14:00.883 "dhgroup": "ffdhe8192" 00:14:00.883 } 00:14:00.883 } 00:14:00.883 ]' 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.883 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.142 04:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-secret DHHC-1:03:MGE4NWNlMTU3NzZhYTdhYWI3Mjk1YjUwOGM1MTFhYjNjYzFjYmE1NzYwMmFhYTE0NWI4NTcxZGE1ODlkZmU5MnhpM3U=: 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --dhchap-key key3 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.077 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.335 request: 00:14:02.335 { 00:14:02.335 "name": "nvme0", 00:14:02.335 "trtype": "tcp", 00:14:02.335 "traddr": "10.0.0.2", 00:14:02.335 "adrfam": "ipv4", 00:14:02.335 "trsvcid": "4420", 00:14:02.335 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:02.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274", 00:14:02.335 "prchk_reftag": false, 00:14:02.335 "prchk_guard": false, 00:14:02.335 "hdgst": false, 00:14:02.335 "ddgst": false, 00:14:02.335 "dhchap_key": "key3", 00:14:02.335 "method": "bdev_nvme_attach_controller", 00:14:02.335 "req_id": 1 00:14:02.335 } 00:14:02.335 Got JSON-RPC error response 00:14:02.335 response: 00:14:02.335 { 00:14:02.335 "code": -5, 00:14:02.335 "message": "Input/output error" 00:14:02.335 } 00:14:02.335 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 
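The failure just traced is the digest-restriction case: after nvmf_subsystem_add_host with key3, the host is limited to sha256 digests and the retried attach is required to fail. A compact sketch of that check, using only commands visible in the log (the leading ! stands in for the harness's NOT wrapper, which treats a non-zero exit as the expected outcome; the JSON-RPC -5 Input/output error above is what produces it):

# host side: allow only sha256 for DH-HMAC-CHAP digest negotiation, then require the
# key3 attach (same bdev_nvme_attach_controller invocation as earlier) to fail
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
! rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3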
00:14:02.335 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:02.335 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:02.335 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:02.335 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:14:02.335 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:14:02.335 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:02.335 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:02.594 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.594 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:02.594 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.594 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:02.594 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.594 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:02.594 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.594 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.594 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.851 request: 00:14:02.851 { 00:14:02.851 "name": "nvme0", 00:14:02.851 "trtype": "tcp", 00:14:02.851 "traddr": "10.0.0.2", 00:14:02.851 "adrfam": "ipv4", 00:14:02.851 "trsvcid": "4420", 00:14:02.851 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:02.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274", 00:14:02.851 "prchk_reftag": false, 00:14:02.851 "prchk_guard": false, 00:14:02.851 "hdgst": false, 00:14:02.851 "ddgst": false, 00:14:02.851 "dhchap_key": "key3", 00:14:02.851 "method": "bdev_nvme_attach_controller", 00:14:02.851 "req_id": 1 00:14:02.851 } 00:14:02.851 Got JSON-RPC error response 
00:14:02.852 response: 00:14:02.852 { 00:14:02.852 "code": -5, 00:14:02.852 "message": "Input/output error" 00:14:02.852 } 00:14:02.852 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:02.852 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:02.852 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:02.852 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:02.852 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:02.852 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:14:02.852 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:02.852 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:02.852 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:02.852 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:03.111 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:14:03.111 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.111 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:03.112 04:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:03.112 request: 00:14:03.112 { 00:14:03.112 "name": "nvme0", 00:14:03.112 "trtype": "tcp", 00:14:03.112 "traddr": "10.0.0.2", 00:14:03.112 "adrfam": "ipv4", 00:14:03.112 "trsvcid": "4420", 00:14:03.112 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:03.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274", 00:14:03.112 "prchk_reftag": false, 00:14:03.112 "prchk_guard": false, 00:14:03.112 "hdgst": false, 00:14:03.112 "ddgst": false, 00:14:03.112 "dhchap_key": "key0", 00:14:03.112 "dhchap_ctrlr_key": "key1", 00:14:03.112 "method": "bdev_nvme_attach_controller", 00:14:03.112 "req_id": 1 00:14:03.112 } 00:14:03.112 Got JSON-RPC error response 00:14:03.112 response: 00:14:03.112 { 00:14:03.112 "code": -5, 00:14:03.112 "message": "Input/output error" 00:14:03.112 } 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:03.112 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:03.678 00:14:03.678 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:14:03.678 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.678 04:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:14:03.678 04:09:57 
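This stretch of target/auth.sh (@186 through @196, which finishes just below) re-registers the host on the subsystem and then confirms that a plain key0 attach works end to end before detaching again. A condensed sketch of that flow, assuming the target answers on rpc.py's default socket (as the rpc_cmd calls here do) and the host app on /var/tmp/host.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274

  # Target side: drop and re-add the host entry on the subsystem.
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn"

  # Host side: attach with key0, check that the controller showed up, then detach.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0
  [[ "$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0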
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.678 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.678 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 82954 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82954 ']' 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82954 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82954 00:14:03.936 killing process with pid 82954 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82954' 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82954 00:14:03.936 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82954 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.502 rmmod nvme_tcp 00:14:04.502 rmmod nvme_fabrics 00:14:04.502 rmmod nvme_keyring 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 85795 ']' 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 85795 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 85795 ']' 00:14:04.502 
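The killprocess calls and the rmmod lines above are the standard teardown: confirm the pid is still alive, kill it, reap it, then unload the NVMe-oF host modules. Roughly, leaving out the sudo and process-name checks that the real helper in common/autotest_common.sh performs:

  kill_and_wait() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # already gone
      kill "$pid"
      wait "$pid" || true                      # a killed app exits non-zero; ignore it
  }

  kill_and_wait "$nvmf_app_pid"   # hypothetical variable holding the pid printed above
  # modprobe -v -r also drops the now-unused dependents (nvme_fabrics, nvme_keyring in this run).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics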
04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 85795 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85795 00:14:04.502 killing process with pid 85795 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85795' 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 85795 00:14:04.502 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 85795 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ibs /tmp/spdk.key-sha256.zva /tmp/spdk.key-sha384.GY8 /tmp/spdk.key-sha512.mlE /tmp/spdk.key-sha512.pmB /tmp/spdk.key-sha384.EV4 /tmp/spdk.key-sha256.Lc5 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:04.761 00:14:04.761 real 2m31.359s 00:14:04.761 user 6m2.844s 00:14:04.761 sys 0m23.966s 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.761 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.761 ************************************ 00:14:04.761 END TEST nvmf_auth_target 00:14:04.761 ************************************ 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:04.761 ************************************ 00:14:04.761 START TEST nvmf_bdevio_no_huge 00:14:04.761 ************************************ 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:04.761 * Looking for test storage... 00:14:04.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.761 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.019 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:05.020 Cannot find device "nvmf_tgt_br" 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.020 Cannot find device "nvmf_tgt_br2" 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:05.020 Cannot find device "nvmf_tgt_br" 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:05.020 Cannot find device "nvmf_tgt_br2" 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link 
set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:05.020 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:05.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:14:05.278 00:14:05.278 --- 10.0.0.2 ping statistics --- 00:14:05.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.278 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:05.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:05.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:05.278 00:14:05.278 --- 10.0.0.3 ping statistics --- 00:14:05.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.278 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:05.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:05.278 00:14:05.278 --- 10.0.0.1 ping statistics --- 00:14:05.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.278 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=86101 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 86101 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 86101 ']' 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.278 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:05.278 [2024-07-23 04:09:58.532492] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
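The nvmf_veth_init block above is what gives every tcp suite in this job its network: nvmf_init_if (10.0.0.1/24) stays in the root namespace for the initiator, the target's nvmf_tgt_if and nvmf_tgt_if2 (10.0.0.2/24, 10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, the veth peer ends are enslaved to the nvmf_br bridge, and the three pings verify reachability in both directions. A trimmed-down sketch of that setup, with the error fallbacks and the teardown half omitted:

  ns=nvmf_tgt_ns_spdk
  ip netns add "$ns"

  # One veth pair per endpoint: the *_if end carries the address, the *_br end joins the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns "$ns"
  ip link set nvmf_tgt_if2 netns "$ns"

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$ns" ip link set nvmf_tgt_if up
  ip netns exec "$ns" ip link set nvmf_tgt_if2 up
  ip netns exec "$ns" ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Admit NVMe/TCP on the initiator side, let the bridge forward, then prove connectivity.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec "$ns" ping -c 1 10.0.0.1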
00:14:05.279 [2024-07-23 04:09:58.532576] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:05.536 [2024-07-23 04:09:58.677582] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:05.536 [2024-07-23 04:09:58.680415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.536 [2024-07-23 04:09:58.792043] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.536 [2024-07-23 04:09:58.792100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.536 [2024-07-23 04:09:58.792115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.536 [2024-07-23 04:09:58.792126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.536 [2024-07-23 04:09:58.792136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.536 [2024-07-23 04:09:58.792296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:05.536 [2024-07-23 04:09:58.792790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:05.536 [2024-07-23 04:09:58.793183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:05.536 [2024-07-23 04:09:58.793215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.536 [2024-07-23 04:09:58.799076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 [2024-07-23 04:09:59.570793] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- 
# set +x 00:14:06.471 Malloc0 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 [2024-07-23 04:09:59.611049] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.471 { 00:14:06.471 "params": { 00:14:06.471 "name": "Nvme$subsystem", 00:14:06.471 "trtype": "$TEST_TRANSPORT", 00:14:06.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.471 "adrfam": "ipv4", 00:14:06.471 "trsvcid": "$NVMF_PORT", 00:14:06.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.471 "hdgst": ${hdgst:-false}, 00:14:06.471 "ddgst": ${ddgst:-false} 00:14:06.471 }, 00:14:06.471 "method": "bdev_nvme_attach_controller" 00:14:06.471 } 00:14:06.471 EOF 00:14:06.471 )") 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
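Before the bdevio suite starts, the target is brought up inside the namespace without hugepages (--no-huge -s 1024, i.e. a 1024 MB plain-memory pool) and provisioned with a single Malloc-backed namespace behind a TCP listener; bdevio is then fed the generated bdev_nvme_attach_controller JSON (assembled above, printed in full just below) on fd 62. A compressed sketch of that provisioning, with waitforlisten omitted and $nvme_json standing in for the JSON that gen_nvmf_target_json produces here:

  spdk=/home/vagrant/spdk_repo/spdk
  ns=nvmf_tgt_ns_spdk

  # Target app in the namespace: no hugepages, 1024 MB of memory, core mask 0x78.
  ip netns exec "$ns" "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!   # reaped later by the killprocess teardown

  rpc="$spdk/scripts/rpc.py"
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevio also runs without hugepages and reads its bdev config from fd 62.
  "$spdk/test/bdev/bdevio/bdevio" --json /dev/fd/62 --no-huge -s 1024 62<<< "$nvme_json"

The point of this suite is simply that both processes run with --no-huge, so the whole NVMe/TCP data path is exercised without hugepage-backed memory.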
00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:06.471 04:09:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.471 "params": { 00:14:06.471 "name": "Nvme1", 00:14:06.471 "trtype": "tcp", 00:14:06.471 "traddr": "10.0.0.2", 00:14:06.471 "adrfam": "ipv4", 00:14:06.471 "trsvcid": "4420", 00:14:06.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.471 "hdgst": false, 00:14:06.471 "ddgst": false 00:14:06.471 }, 00:14:06.471 "method": "bdev_nvme_attach_controller" 00:14:06.471 }' 00:14:06.471 [2024-07-23 04:09:59.669470] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:06.471 [2024-07-23 04:09:59.669575] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid86137 ] 00:14:06.471 [2024-07-23 04:09:59.809661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:06.471 [2024-07-23 04:09:59.812268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:06.730 [2024-07-23 04:09:59.928670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.730 [2024-07-23 04:09:59.928728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.730 [2024-07-23 04:09:59.928733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.730 [2024-07-23 04:09:59.943529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:06.988 I/O targets: 00:14:06.988 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:06.988 00:14:06.988 00:14:06.988 CUnit - A unit testing framework for C - Version 2.1-3 00:14:06.988 http://cunit.sourceforge.net/ 00:14:06.988 00:14:06.988 00:14:06.988 Suite: bdevio tests on: Nvme1n1 00:14:06.988 Test: blockdev write read block ...passed 00:14:06.988 Test: blockdev write zeroes read block ...passed 00:14:06.988 Test: blockdev write zeroes read no split ...passed 00:14:06.988 Test: blockdev write zeroes read split ...passed 00:14:06.988 Test: blockdev write zeroes read split partial ...passed 00:14:06.988 Test: blockdev reset ...[2024-07-23 04:10:00.135259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:06.988 [2024-07-23 04:10:00.135392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce2ad0 (9): Bad file descriptor 00:14:06.988 [2024-07-23 04:10:00.153598] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:06.988 passed 00:14:06.988 Test: blockdev write read 8 blocks ...passed 00:14:06.988 Test: blockdev write read size > 128k ...passed 00:14:06.988 Test: blockdev write read invalid size ...passed 00:14:06.988 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:06.988 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:06.988 Test: blockdev write read max offset ...passed 00:14:06.988 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:06.988 Test: blockdev writev readv 8 blocks ...passed 00:14:06.988 Test: blockdev writev readv 30 x 1block ...passed 00:14:06.988 Test: blockdev writev readv block ...passed 00:14:06.988 Test: blockdev writev readv size > 128k ...passed 00:14:06.988 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:06.988 Test: blockdev comparev and writev ...[2024-07-23 04:10:00.165393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.988 [2024-07-23 04:10:00.165454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:06.988 [2024-07-23 04:10:00.165481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.988 [2024-07-23 04:10:00.165495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:06.988 [2024-07-23 04:10:00.165970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.988 [2024-07-23 04:10:00.166007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:06.988 [2024-07-23 04:10:00.166030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.988 [2024-07-23 04:10:00.166043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:06.988 [2024-07-23 04:10:00.166440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.988 [2024-07-23 04:10:00.166543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:06.988 [2024-07-23 04:10:00.166568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.988 [2024-07-23 04:10:00.166580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:06.988 [2024-07-23 04:10:00.167274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.988 [2024-07-23 04:10:00.167345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:06.988 [2024-07-23 04:10:00.167368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.988 [2024-07-23 04:10:00.167380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:06.988 passed 00:14:06.988 Test: blockdev nvme passthru rw ...passed 00:14:06.988 Test: blockdev nvme passthru vendor specific ...[2024-07-23 04:10:00.168794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.988 [2024-07-23 04:10:00.168826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:06.988 [2024-07-23 04:10:00.169316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.988 [2024-07-23 04:10:00.169362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:06.988 [2024-07-23 04:10:00.169632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.988 [2024-07-23 04:10:00.169800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:06.988 [2024-07-23 04:10:00.170376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.988 [2024-07-23 04:10:00.170411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:06.988 passed 00:14:06.988 Test: blockdev nvme admin passthru ...passed 00:14:06.988 Test: blockdev copy ...passed 00:14:06.988 00:14:06.988 Run Summary: Type Total Ran Passed Failed Inactive 00:14:06.988 suites 1 1 n/a 0 0 00:14:06.988 tests 23 23 23 0 0 00:14:06.988 asserts 152 152 152 0 n/a 00:14:06.988 00:14:06.988 Elapsed time = 0.175 seconds 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.246 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.246 rmmod nvme_tcp 00:14:07.504 rmmod nvme_fabrics 00:14:07.504 rmmod nvme_keyring 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 86101 ']' 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 86101 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 86101 ']' 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 86101 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86101 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:07.504 killing process with pid 86101 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86101' 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 86101 00:14:07.504 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 86101 00:14:07.762 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.762 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.762 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.762 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.762 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.762 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.762 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.762 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.762 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:07.762 00:14:07.762 real 0m3.022s 00:14:07.762 user 0m10.054s 00:14:07.762 sys 0m1.201s 00:14:07.762 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.762 ************************************ 00:14:07.762 END TEST nvmf_bdevio_no_huge 00:14:07.762 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:07.762 ************************************ 00:14:07.762 04:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:07.762 04:10:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:07.762 
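The bdevio suite above finished in about 3 s of wall time, and nvmf_target_extra.sh immediately queues the next one through the same run_test wrapper that brackets each suite with its START TEST / END TEST banners and timing. To replay a single suite outside the harness (assuming a built SPDK tree, root privileges, and the builder's paths shown in the trace), the underlying invocation is just the script plus the transport argument:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/target/tls.sh --transport=tcp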
04:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.762 04:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.762 04:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:07.762 ************************************ 00:14:07.762 START TEST nvmf_tls 00:14:07.762 ************************************ 00:14:07.762 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:08.022 * Looking for test storage... 00:14:08.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
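The nvmftestinit / nvmf_veth_init sequence that the trace enters next builds the private network this run uses: one initiator-side veth in the root namespace and two target-side veths inside the nvmf_tgt_ns_spdk namespace, joined by the nvmf_br bridge, with the NVMe/TCP port opened on the initiator interface. A condensed sketch of that setup follows, using the same names and addresses that appear in the trace below (the "Cannot find device" and "Cannot open network namespace" messages are just the cleanup of links that do not exist yet):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target, first address
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target, second address
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT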
00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:08.022 Cannot find device 
"nvmf_tgt_br" 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.022 Cannot find device "nvmf_tgt_br2" 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:08.022 Cannot find device "nvmf_tgt_br" 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:08.022 Cannot find device "nvmf_tgt_br2" 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:08.022 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:08.280 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:08.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:14:08.281 00:14:08.281 --- 10.0.0.2 ping statistics --- 00:14:08.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.281 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:08.281 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:08.281 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:14:08.281 00:14:08.281 --- 10.0.0.3 ping statistics --- 00:14:08.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.281 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:08.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:08.281 00:14:08.281 --- 10.0.0.1 ping statistics --- 00:14:08.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.281 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86313 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86313 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86313 ']' 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.281 04:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.281 [2024-07-23 04:10:01.531432] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:08.281 [2024-07-23 04:10:01.531525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.539 [2024-07-23 04:10:01.651187] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:14:08.539 [2024-07-23 04:10:01.670309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.539 [2024-07-23 04:10:01.738958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.539 [2024-07-23 04:10:01.739046] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.539 [2024-07-23 04:10:01.739061] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.539 [2024-07-23 04:10:01.739072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.539 [2024-07-23 04:10:01.739081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.539 [2024-07-23 04:10:01.739121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.106 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.106 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:09.106 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.106 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.106 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.364 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.364 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:09.364 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:09.364 true 00:14:09.364 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:09.364 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:09.621 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:09.621 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:09.621 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:09.879 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:09.879 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:10.138 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:10.138 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:10.138 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:10.396 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:10.396 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:10.396 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:10.396 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:10.654 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:10.654 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:10.912 04:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:10.912 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:10.912 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:10.912 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:10.912 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:14:11.170 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:14:11.171 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:11.171 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:11.429 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:11.429 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:11.429 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:14:11.429 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:11.688 
04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.aXLA6hudFP 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.OghtzQ7Fsg 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.aXLA6hudFP 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.OghtzQ7Fsg 00:14:11.688 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:11.946 04:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:12.205 [2024-07-23 04:10:05.446277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:12.205 04:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.aXLA6hudFP 00:14:12.205 04:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.aXLA6hudFP 00:14:12.205 04:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:12.463 [2024-07-23 04:10:05.673961] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.463 04:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:12.722 04:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:12.722 [2024-07-23 04:10:06.058006] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:12.722 [2024-07-23 04:10:06.058191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.980 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:12.980 malloc0 00:14:12.980 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:13.239 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aXLA6hudFP 00:14:13.498 [2024-07-23 04:10:06.664408] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:13.498 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aXLA6hudFP 00:14:25.703 Initializing NVMe Controllers 00:14:25.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:25.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:25.703 Initialization complete. Launching workers. 00:14:25.703 ======================================================== 00:14:25.703 Latency(us) 00:14:25.703 Device Information : IOPS MiB/s Average min max 00:14:25.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11441.07 44.69 5594.87 1544.87 7850.60 00:14:25.703 ======================================================== 00:14:25.703 Total : 11441.07 44.69 5594.87 1544.87 7850.60 00:14:25.703 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aXLA6hudFP 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aXLA6hudFP' 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86538 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86538 /var/tmp/bdevperf.sock 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86538 ']' 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.703 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.703 [2024-07-23 04:10:16.919338] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:14:25.703 [2024-07-23 04:10:16.919456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86538 ] 00:14:25.703 [2024-07-23 04:10:17.041819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:25.703 [2024-07-23 04:10:17.063999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.703 [2024-07-23 04:10:17.140085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.703 [2024-07-23 04:10:17.198373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:25.703 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.703 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:25.703 04:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aXLA6hudFP 00:14:25.703 [2024-07-23 04:10:18.033395] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:25.703 [2024-07-23 04:10:18.033519] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:25.703 TLSTESTn1 00:14:25.703 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:25.703 Running I/O for 10 seconds... 
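Before the 10-second run's results are reported below, the trace so far amounts to a complete TLS bring-up: two PSKs were written in the NVMe TLS interchange format, the target (started inside the namespace with --wait-for-rpc) was given a TLS-enabled TCP listener plus a host entry binding host1's key to cnode1, and bdevperf then attached with the matching key. A condensed sketch of that flow; the wrap_psk helper is only an illustration of what format_interchange_psk appears to compute (key bytes plus a little-endian CRC32, base64-encoded behind the NVMeTLSkey-1:01: prefix seen above), not the test's own code:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Wrap a configured key as an NVMe TLS interchange PSK (illustrative helper;
    # "01" is the hash field carried by the keys printed earlier in the trace).
    wrap_psk() {
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:01:"+base64.b64encode(k+c).decode()+":", end="")' "$1"
    }
    key_path=$(mktemp) && wrap_psk 00112233445566778899aabbccddeeff > "$key_path" && chmod 0600 "$key_path"

    # Target side: nvmf_tgt was launched with --wait-for-rpc, so the ssl socket
    # implementation is configured before the framework initializes.
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

    # Initiator side: bdevperf (started with -z -r /var/tmp/bdevperf.sock) attaches
    # over TLS by presenting the same PSK for host1.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$key_path"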
00:14:35.681 00:14:35.681 Latency(us) 00:14:35.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.681 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:35.681 Verification LBA range: start 0x0 length 0x2000 00:14:35.681 TLSTESTn1 : 10.02 4681.54 18.29 0.00 0.00 27282.96 6106.76 18945.86 00:14:35.681 =================================================================================================================== 00:14:35.681 Total : 4681.54 18.29 0.00 0.00 27282.96 6106.76 18945.86 00:14:35.681 0 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 86538 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86538 ']' 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86538 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86538 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:35.681 killing process with pid 86538 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86538' 00:14:35.681 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86538 00:14:35.681 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.681 00:14:35.682 Latency(us) 00:14:35.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.682 =================================================================================================================== 00:14:35.682 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86538 00:14:35.682 [2024-07-23 04:10:28.306026] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OghtzQ7Fsg 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OghtzQ7Fsg 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.682 04:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OghtzQ7Fsg 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OghtzQ7Fsg' 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86676 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86676 /var/tmp/bdevperf.sock 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86676 ']' 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.682 [2024-07-23 04:10:28.555086] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:35.682 [2024-07-23 04:10:28.555175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86676 ] 00:14:35.682 [2024-07-23 04:10:28.676894] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
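This second bdevperf instance exercises the failure path: the attach below presents the second interchange key (/tmp/tmp.OghtzQ7Fsg), which was never registered with nvmf_subsystem_add_host, so the target cannot match a PSK during the TLS handshake and the RPC is expected to fail with an I/O error. The test expresses that expectation with its NOT wrapper around run_bdevperf; a minimal sketch of the same check at the RPC level, assuming the socket and key paths shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Only the key in /tmp/tmp.aXLA6hudFP is registered for host1/cnode1, so an
    # attach that presents the other key must be rejected.
    if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
            -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
            -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OghtzQ7Fsg; then
        echo "unexpected: attach with an unregistered PSK should have failed" >&2
        exit 1
    fi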
00:14:35.682 [2024-07-23 04:10:28.692124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.682 [2024-07-23 04:10:28.748902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.682 [2024-07-23 04:10:28.799518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:35.682 04:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OghtzQ7Fsg 00:14:35.941 [2024-07-23 04:10:29.074778] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:35.941 [2024-07-23 04:10:29.074887] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:35.941 [2024-07-23 04:10:29.080158] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:35.941 [2024-07-23 04:10:29.080439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1421f00 (107): Transport endpoint is not connected 00:14:35.941 [2024-07-23 04:10:29.081413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1421f00 (9): Bad file descriptor 00:14:35.941 [2024-07-23 04:10:29.082410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:35.941 [2024-07-23 04:10:29.082447] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:35.941 [2024-07-23 04:10:29.082462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:35.941 request: 00:14:35.941 { 00:14:35.941 "name": "TLSTEST", 00:14:35.941 "trtype": "tcp", 00:14:35.941 "traddr": "10.0.0.2", 00:14:35.941 "adrfam": "ipv4", 00:14:35.941 "trsvcid": "4420", 00:14:35.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:35.941 "prchk_reftag": false, 00:14:35.941 "prchk_guard": false, 00:14:35.941 "hdgst": false, 00:14:35.941 "ddgst": false, 00:14:35.941 "psk": "/tmp/tmp.OghtzQ7Fsg", 00:14:35.941 "method": "bdev_nvme_attach_controller", 00:14:35.941 "req_id": 1 00:14:35.941 } 00:14:35.941 Got JSON-RPC error response 00:14:35.941 response: 00:14:35.941 { 00:14:35.941 "code": -5, 00:14:35.941 "message": "Input/output error" 00:14:35.941 } 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 86676 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86676 ']' 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86676 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86676 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:35.941 killing process with pid 86676 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86676' 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86676 00:14:35.941 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.941 00:14:35.941 Latency(us) 00:14:35.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.941 =================================================================================================================== 00:14:35.941 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:35.941 [2024-07-23 04:10:29.122662] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:35.941 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86676 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aXLA6hudFP 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aXLA6hudFP 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:36.200 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aXLA6hudFP 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aXLA6hudFP' 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86692 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86692 /var/tmp/bdevperf.sock 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86692 ']' 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:36.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:36.201 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.201 [2024-07-23 04:10:29.360408] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:36.201 [2024-07-23 04:10:29.360503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86692 ] 00:14:36.201 [2024-07-23 04:10:29.482409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
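The third case keeps the valid key but presents it as host2 (NOT run_bdevperf ... host2 /tmp/tmp.aXLA6hudFP above). The target derives the TLS PSK identity from the host and subsystem NQNs, and only host1 was registered for cnode1, so the lookup logged further below fails with 'Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1' and the attach again returns an I/O error. If host2 were actually meant to connect with this key it would need its own host entry; a sketch using the same RPC as earlier (hypothetical, not part of this test):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.aXLA6hudFP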
00:14:36.201 [2024-07-23 04:10:29.499569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.459 [2024-07-23 04:10:29.564197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.459 [2024-07-23 04:10:29.619138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:37.027 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.027 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:37.027 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.aXLA6hudFP 00:14:37.287 [2024-07-23 04:10:30.480529] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.287 [2024-07-23 04:10:30.480637] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:37.287 [2024-07-23 04:10:30.490220] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:37.287 [2024-07-23 04:10:30.490269] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:37.287 [2024-07-23 04:10:30.490331] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:37.287 [2024-07-23 04:10:30.491181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x553f00 (107): Transport endpoint is not connected 00:14:37.287 [2024-07-23 04:10:30.492170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x553f00 (9): Bad file descriptor 00:14:37.287 [2024-07-23 04:10:30.493165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:37.287 [2024-07-23 04:10:30.493190] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:37.287 [2024-07-23 04:10:30.493204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:37.287 request: 00:14:37.287 { 00:14:37.287 "name": "TLSTEST", 00:14:37.287 "trtype": "tcp", 00:14:37.287 "traddr": "10.0.0.2", 00:14:37.287 "adrfam": "ipv4", 00:14:37.287 "trsvcid": "4420", 00:14:37.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.287 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:37.287 "prchk_reftag": false, 00:14:37.287 "prchk_guard": false, 00:14:37.287 "hdgst": false, 00:14:37.287 "ddgst": false, 00:14:37.287 "psk": "/tmp/tmp.aXLA6hudFP", 00:14:37.287 "method": "bdev_nvme_attach_controller", 00:14:37.287 "req_id": 1 00:14:37.287 } 00:14:37.287 Got JSON-RPC error response 00:14:37.287 response: 00:14:37.287 { 00:14:37.287 "code": -5, 00:14:37.287 "message": "Input/output error" 00:14:37.287 } 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 86692 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86692 ']' 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86692 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86692 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:37.287 killing process with pid 86692 00:14:37.287 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.287 00:14:37.287 Latency(us) 00:14:37.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.287 =================================================================================================================== 00:14:37.287 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86692' 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86692 00:14:37.287 [2024-07-23 04:10:30.539433] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:37.287 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86692 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aXLA6hudFP 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aXLA6hudFP 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aXLA6hudFP 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aXLA6hudFP' 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86715 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86715 /var/tmp/bdevperf.sock 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86715 ']' 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.547 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.547 [2024-07-23 04:10:30.777443] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:37.547 [2024-07-23 04:10:30.777544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86715 ] 00:14:37.806 [2024-07-23 04:10:30.899796] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:37.806 [2024-07-23 04:10:30.915892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.806 [2024-07-23 04:10:30.974028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.806 [2024-07-23 04:10:31.024664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:38.374 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.374 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:38.374 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aXLA6hudFP 00:14:38.633 [2024-07-23 04:10:31.867963] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.633 [2024-07-23 04:10:31.868086] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:38.633 [2024-07-23 04:10:31.873120] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:38.633 [2024-07-23 04:10:31.873160] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:38.633 [2024-07-23 04:10:31.873215] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:38.633 [2024-07-23 04:10:31.873841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57f00 (107): Transport endpoint is not connected 00:14:38.633 [2024-07-23 04:10:31.874827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57f00 (9): Bad file descriptor 00:14:38.633 [2024-07-23 04:10:31.875823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:38.633 [2024-07-23 04:10:31.875865] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:38.633 [2024-07-23 04:10:31.875896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:38.633 request: 00:14:38.633 { 00:14:38.633 "name": "TLSTEST", 00:14:38.633 "trtype": "tcp", 00:14:38.633 "traddr": "10.0.0.2", 00:14:38.633 "adrfam": "ipv4", 00:14:38.633 "trsvcid": "4420", 00:14:38.633 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:38.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.633 "prchk_reftag": false, 00:14:38.633 "prchk_guard": false, 00:14:38.633 "hdgst": false, 00:14:38.633 "ddgst": false, 00:14:38.633 "psk": "/tmp/tmp.aXLA6hudFP", 00:14:38.633 "method": "bdev_nvme_attach_controller", 00:14:38.633 "req_id": 1 00:14:38.633 } 00:14:38.633 Got JSON-RPC error response 00:14:38.633 response: 00:14:38.633 { 00:14:38.633 "code": -5, 00:14:38.633 "message": "Input/output error" 00:14:38.633 } 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 86715 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86715 ']' 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86715 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86715 00:14:38.633 killing process with pid 86715 00:14:38.633 Received shutdown signal, test time was about 10.000000 seconds 00:14:38.633 00:14:38.633 Latency(us) 00:14:38.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.633 =================================================================================================================== 00:14:38.633 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86715' 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86715 00:14:38.633 [2024-07-23 04:10:31.920601] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:38.633 04:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86715 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86741 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86741 /var/tmp/bdevperf.sock 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86741 ']' 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.893 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.893 [2024-07-23 04:10:32.150271] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:38.893 [2024-07-23 04:10:32.150389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86741 ] 00:14:39.152 [2024-07-23 04:10:32.266631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
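The earlier failure ("Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2") shows the identity string the TLS handshake looks up: the NVMe0R01 marker followed by the host NQN and the subsystem NQN, space-separated. A hypothetical helper that rebuilds the same string can make a host/subsystem mismatch like that one easier to spot; the marker is copied verbatim from the log and its sub-fields are not derived here:

    # Hypothetical helper: reconstruct the PSK identity string seen in the
    # "Could not find PSK for identity" errors above.
    psk_identity() {
        local hostnqn=$1 subnqn=$2
        printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    }

    psk_identity nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
    # -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2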
00:14:39.152 [2024-07-23 04:10:32.284831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.152 [2024-07-23 04:10:32.342182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.152 [2024-07-23 04:10:32.397787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:39.152 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.152 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:39.152 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:39.411 [2024-07-23 04:10:32.691492] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:39.411 [2024-07-23 04:10:32.693445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ecb20 (9): Bad file descriptor 00:14:39.411 [2024-07-23 04:10:32.694441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:39.411 [2024-07-23 04:10:32.694483] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:39.411 [2024-07-23 04:10:32.694513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:39.411 request: 00:14:39.411 { 00:14:39.411 "name": "TLSTEST", 00:14:39.411 "trtype": "tcp", 00:14:39.411 "traddr": "10.0.0.2", 00:14:39.411 "adrfam": "ipv4", 00:14:39.411 "trsvcid": "4420", 00:14:39.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.411 "prchk_reftag": false, 00:14:39.411 "prchk_guard": false, 00:14:39.411 "hdgst": false, 00:14:39.411 "ddgst": false, 00:14:39.411 "method": "bdev_nvme_attach_controller", 00:14:39.411 "req_id": 1 00:14:39.411 } 00:14:39.411 Got JSON-RPC error response 00:14:39.411 response: 00:14:39.411 { 00:14:39.411 "code": -5, 00:14:39.411 "message": "Input/output error" 00:14:39.411 } 00:14:39.411 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 86741 00:14:39.411 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86741 ']' 00:14:39.411 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86741 00:14:39.411 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:39.411 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:39.411 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86741 00:14:39.411 killing process with pid 86741 00:14:39.411 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.411 00:14:39.411 Latency(us) 00:14:39.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.411 =================================================================================================================== 00:14:39.411 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:39.411 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:39.411 
04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:39.411 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86741' 00:14:39.411 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86741 00:14:39.412 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86741 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 86313 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86313 ']' 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86313 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86313 00:14:39.671 killing process with pid 86313 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86313' 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86313 00:14:39.671 [2024-07-23 04:10:32.933039] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:39.671 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86313 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@160 -- # mktemp 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.GOCmR5ZfC5 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.GOCmR5ZfC5 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86771 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86771 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86771 ']' 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.930 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.930 [2024-07-23 04:10:33.247920] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:39.930 [2024-07-23 04:10:33.248010] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.188 [2024-07-23 04:10:33.363588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:40.188 [2024-07-23 04:10:33.373033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.188 [2024-07-23 04:10:33.440655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.188 [2024-07-23 04:10:33.440719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.188 [2024-07-23 04:10:33.440730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.188 [2024-07-23 04:10:33.440737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.188 [2024-07-23 04:10:33.440743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
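The format_interchange_psk step above turns the raw hex key 00112233445566778899aabbccddeeff0011223344556677 and digest selector 2 into the NVMeTLSkey-1:02:...: string that is then written to the mktemp file and restricted to 0600. A sketch of that formatting is below; it assumes the interchange payload is the ASCII key with a little-endian CRC32 appended before Base64 encoding, and that digest id 2 selects SHA-384 — neither detail is spelled out in the log itself:

    # Sketch of the PSK interchange formatting traced above
    # (nvmf/common.sh format_interchange_psk / format_key).
    key="00112233445566778899aabbccddeeff0011223344556677"
    digest=2   # assumed to select SHA-384 in the interchange format

    key_long=$(python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()
    digest = int(sys.argv[2])
    # Assumption: CRC32 of the key bytes, little-endian, appended before Base64.
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
    ' "$key" "$digest")

    key_path=$(mktemp)
    echo -n "$key_long" > "$key_path"
    chmod 0600 "$key_path"   # the 0666 case later in this log is rejected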
00:14:40.188 [2024-07-23 04:10:33.440769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.188 [2024-07-23 04:10:33.492858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:40.447 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.447 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:40.447 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.447 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.447 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.447 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.447 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.GOCmR5ZfC5 00:14:40.447 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GOCmR5ZfC5 00:14:40.447 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:40.706 [2024-07-23 04:10:33.837113] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.706 04:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:40.965 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:41.224 [2024-07-23 04:10:34.333203] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.224 [2024-07-23 04:10:34.333417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.224 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:41.483 malloc0 00:14:41.483 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:41.483 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GOCmR5ZfC5 00:14:41.742 [2024-07-23 04:10:34.979485] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GOCmR5ZfC5 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GOCmR5ZfC5' 00:14:41.742 04:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86813 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86813 /var/tmp/bdevperf.sock 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86813 ']' 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.742 04:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.742 [2024-07-23 04:10:35.039895] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:41.742 [2024-07-23 04:10:35.039988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86813 ] 00:14:42.000 [2024-07-23 04:10:35.156224] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
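The setup_nvmf_tgt sequence traced just above (target/tls.sh@49-@58) is the target-side half of the test: a TCP transport, a subsystem with a malloc namespace, a TLS-enabled listener, and a host entry carrying the PSK. Collected in one place, the rpc.py calls from the log read as follows; the rpc and key_path variables are placeholders for the paths shown above:

    # Target-side TLS setup, collected from the rpc.py calls traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key_path=/tmp/tmp.GOCmR5ZfC5

    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"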
00:14:42.000 [2024-07-23 04:10:35.176153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.000 [2024-07-23 04:10:35.241082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.000 [2024-07-23 04:10:35.299024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:42.568 04:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.568 04:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:42.568 04:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GOCmR5ZfC5 00:14:42.827 [2024-07-23 04:10:36.112875] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.827 [2024-07-23 04:10:36.113005] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:43.085 TLSTESTn1 00:14:43.085 04:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:43.085 Running I/O for 10 seconds... 00:14:53.078 00:14:53.078 Latency(us) 00:14:53.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.078 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:53.078 Verification LBA range: start 0x0 length 0x2000 00:14:53.078 TLSTESTn1 : 10.02 4639.35 18.12 0.00 0.00 27539.37 6434.44 18945.86 00:14:53.078 =================================================================================================================== 00:14:53.078 Total : 4639.35 18.12 0.00 0.00 27539.37 6434.44 18945.86 00:14:53.078 0 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 86813 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86813 ']' 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86813 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86813 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:53.079 killing process with pid 86813 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86813' 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86813 00:14:53.079 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.079 00:14:53.079 Latency(us) 00:14:53.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:14:53.079 =================================================================================================================== 00:14:53.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.079 [2024-07-23 04:10:46.347647] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:53.079 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86813 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.GOCmR5ZfC5 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GOCmR5ZfC5 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GOCmR5ZfC5 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GOCmR5ZfC5 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GOCmR5ZfC5' 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86942 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86942 /var/tmp/bdevperf.sock 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86942 ']' 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
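The successful TLSTESTn1 run above is driven entirely over the bdevperf RPC socket: start bdevperf idle, attach a TLS-protected controller, then ask it to run the configured verify workload. The corresponding commands from the log, with the same placeholders as before (the test script waits for the RPC socket to appear before issuing calls):

    # Initiator side of the successful run above.
    spdk=/home/vagrant/spdk_repo/spdk
    key_path=/tmp/tmp.GOCmR5ZfC5

    # bdevperf starts idle (-z) and listens on its own RPC socket (-r).
    $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # Attach the TLS-protected controller over that socket.
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

    # Kick off the configured workload and collect the IOPS/latency summary.
    $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests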
00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.338 04:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.338 [2024-07-23 04:10:46.593312] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:53.338 [2024-07-23 04:10:46.593404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86942 ] 00:14:53.597 [2024-07-23 04:10:46.714935] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:53.597 [2024-07-23 04:10:46.733490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.597 [2024-07-23 04:10:46.801672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.597 [2024-07-23 04:10:46.852925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GOCmR5ZfC5 00:14:54.533 [2024-07-23 04:10:47.776057] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:54.533 [2024-07-23 04:10:47.776798] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:54.533 [2024-07-23 04:10:47.776845] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.GOCmR5ZfC5 00:14:54.533 request: 00:14:54.533 { 00:14:54.533 "name": "TLSTEST", 00:14:54.533 "trtype": "tcp", 00:14:54.533 "traddr": "10.0.0.2", 00:14:54.533 "adrfam": "ipv4", 00:14:54.533 "trsvcid": "4420", 00:14:54.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.533 "prchk_reftag": false, 00:14:54.533 "prchk_guard": false, 00:14:54.533 "hdgst": false, 00:14:54.533 "ddgst": false, 00:14:54.533 "psk": "/tmp/tmp.GOCmR5ZfC5", 00:14:54.533 "method": "bdev_nvme_attach_controller", 00:14:54.533 "req_id": 1 00:14:54.533 } 00:14:54.533 Got JSON-RPC error response 00:14:54.533 response: 00:14:54.533 { 00:14:54.533 "code": -1, 00:14:54.533 "message": "Operation not permitted" 00:14:54.533 } 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 86942 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86942 ']' 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86942 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86942 00:14:54.533 04:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:54.533 killing process with pid 86942 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86942' 00:14:54.533 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86942 00:14:54.533 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.533 00:14:54.533 Latency(us) 00:14:54.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.534 =================================================================================================================== 00:14:54.534 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:54.534 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86942 00:14:54.792 04:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 86771 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86771 ']' 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86771 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86771 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:54.792 killing process with pid 86771 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86771' 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86771 00:14:54.792 [2024-07-23 04:10:48.024973] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:54.792 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86771 00:14:55.050 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:55.050 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.050 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.050 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.050 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec 
nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:55.050 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86980 00:14:55.051 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86980 00:14:55.051 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86980 ']' 00:14:55.051 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.051 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.051 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.051 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.051 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.051 [2024-07-23 04:10:48.269760] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:55.051 [2024-07-23 04:10:48.269861] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.051 [2024-07-23 04:10:48.385578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:55.309 [2024-07-23 04:10:48.399618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.309 [2024-07-23 04:10:48.455077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.309 [2024-07-23 04:10:48.455154] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.309 [2024-07-23 04:10:48.455181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.309 [2024-07-23 04:10:48.455188] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.309 [2024-07-23 04:10:48.455195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
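The two failures around this point both trace back to file permissions: with the key at 0666 the initiator reports "Incorrect permissions for PSK file" / "Could not load PSK" (error -1, Operation not permitted), and the target-side nvmf_subsystem_add_host that follows fails the same way with -32603. A small guard that checks the mode before handing the file to either side is sketched below; the log only demonstrates that 0600 works and 0666 is rejected, so requiring exactly 0600 here is an assumption:

    # Guard sketch: refuse to use a PSK file unless it is owner-only (0600).
    check_psk_mode() {
        local key_path=$1 mode
        mode=$(stat -c '%a' "$key_path")
        if [[ $mode != 600 ]]; then
            echo "PSK file $key_path has mode $mode, expected 600" >&2
            return 1
        fi
    }

    check_psk_mode /tmp/tmp.GOCmR5ZfC5 || chmod 0600 /tmp/tmp.GOCmR5ZfC5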
00:14:55.309 [2024-07-23 04:10:48.455221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.309 [2024-07-23 04:10:48.507527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.GOCmR5ZfC5 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.GOCmR5ZfC5 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.GOCmR5ZfC5 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GOCmR5ZfC5 00:14:55.309 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:55.567 [2024-07-23 04:10:48.788285] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.567 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:55.826 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:56.084 [2024-07-23 04:10:49.236332] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:56.084 [2024-07-23 04:10:49.236541] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.084 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:56.343 malloc0 00:14:56.343 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:56.343 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GOCmR5ZfC5 00:14:56.601 [2024-07-23 04:10:49.814735] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:56.601 [2024-07-23 04:10:49.814773] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:56.601 [2024-07-23 04:10:49.814818] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:56.601 request: 00:14:56.601 { 00:14:56.601 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.601 "host": "nqn.2016-06.io.spdk:host1", 00:14:56.601 "psk": "/tmp/tmp.GOCmR5ZfC5", 00:14:56.601 "method": "nvmf_subsystem_add_host", 00:14:56.601 "req_id": 1 00:14:56.601 } 00:14:56.601 Got JSON-RPC error response 00:14:56.601 response: 00:14:56.601 { 00:14:56.601 "code": -32603, 00:14:56.601 "message": "Internal error" 00:14:56.601 } 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 86980 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86980 ']' 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86980 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86980 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:56.601 killing process with pid 86980 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86980' 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86980 00:14:56.601 04:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86980 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.GOCmR5ZfC5 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=87034 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 87034 00:14:56.908 04:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87034 ']' 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.908 04:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.908 [2024-07-23 04:10:50.152329] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:56.908 [2024-07-23 04:10:50.152416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.166 [2024-07-23 04:10:50.274495] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:57.166 [2024-07-23 04:10:50.291135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.166 [2024-07-23 04:10:50.350060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.166 [2024-07-23 04:10:50.350129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.166 [2024-07-23 04:10:50.350155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.166 [2024-07-23 04:10:50.350162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.166 [2024-07-23 04:10:50.350169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:57.166 [2024-07-23 04:10:50.350194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.166 [2024-07-23 04:10:50.401925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:57.749 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.749 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:57.749 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.749 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.749 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.749 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.749 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.GOCmR5ZfC5 00:14:57.749 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GOCmR5ZfC5 00:14:57.749 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:58.007 [2024-07-23 04:10:51.329841] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.007 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:58.266 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:58.524 [2024-07-23 04:10:51.785903] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:58.524 [2024-07-23 04:10:51.786152] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.524 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:58.782 malloc0 00:14:58.782 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:59.041 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GOCmR5ZfC5 00:14:59.300 [2024-07-23 04:10:52.432227] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:59.300 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:59.300 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=87084 00:14:59.300 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:59.300 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 87084 /var/tmp/bdevperf.sock 00:14:59.300 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87084 ']' 
00:14:59.300 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:59.300 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:59.300 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.300 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.300 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.300 [2024-07-23 04:10:52.484276] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:14:59.300 [2024-07-23 04:10:52.484373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87084 ] 00:14:59.300 [2024-07-23 04:10:52.600071] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:59.301 [2024-07-23 04:10:52.620028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.559 [2024-07-23 04:10:52.686722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.559 [2024-07-23 04:10:52.743043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:00.137 04:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.137 04:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:00.137 04:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GOCmR5ZfC5 00:15:00.403 [2024-07-23 04:10:53.641311] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.403 [2024-07-23 04:10:53.641414] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:00.403 TLSTESTn1 00:15:00.403 04:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:00.970 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:00.970 "subsystems": [ 00:15:00.970 { 00:15:00.970 "subsystem": "keyring", 00:15:00.970 "config": [] 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "subsystem": "iobuf", 00:15:00.970 "config": [ 00:15:00.970 { 00:15:00.970 "method": "iobuf_set_options", 00:15:00.970 "params": { 00:15:00.970 "small_pool_count": 8192, 00:15:00.970 "large_pool_count": 1024, 00:15:00.970 "small_bufsize": 8192, 00:15:00.970 "large_bufsize": 135168 00:15:00.970 } 00:15:00.970 } 00:15:00.970 ] 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "subsystem": "sock", 00:15:00.970 "config": [ 00:15:00.970 { 00:15:00.970 "method": "sock_set_default_impl", 00:15:00.970 "params": { 00:15:00.970 "impl_name": "uring" 00:15:00.970 } 
00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "sock_impl_set_options", 00:15:00.970 "params": { 00:15:00.970 "impl_name": "ssl", 00:15:00.970 "recv_buf_size": 4096, 00:15:00.970 "send_buf_size": 4096, 00:15:00.970 "enable_recv_pipe": true, 00:15:00.970 "enable_quickack": false, 00:15:00.970 "enable_placement_id": 0, 00:15:00.970 "enable_zerocopy_send_server": true, 00:15:00.970 "enable_zerocopy_send_client": false, 00:15:00.970 "zerocopy_threshold": 0, 00:15:00.970 "tls_version": 0, 00:15:00.970 "enable_ktls": false 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "sock_impl_set_options", 00:15:00.970 "params": { 00:15:00.970 "impl_name": "posix", 00:15:00.970 "recv_buf_size": 2097152, 00:15:00.970 "send_buf_size": 2097152, 00:15:00.970 "enable_recv_pipe": true, 00:15:00.970 "enable_quickack": false, 00:15:00.970 "enable_placement_id": 0, 00:15:00.970 "enable_zerocopy_send_server": true, 00:15:00.970 "enable_zerocopy_send_client": false, 00:15:00.970 "zerocopy_threshold": 0, 00:15:00.970 "tls_version": 0, 00:15:00.970 "enable_ktls": false 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "sock_impl_set_options", 00:15:00.970 "params": { 00:15:00.970 "impl_name": "uring", 00:15:00.970 "recv_buf_size": 2097152, 00:15:00.970 "send_buf_size": 2097152, 00:15:00.970 "enable_recv_pipe": true, 00:15:00.970 "enable_quickack": false, 00:15:00.970 "enable_placement_id": 0, 00:15:00.970 "enable_zerocopy_send_server": false, 00:15:00.970 "enable_zerocopy_send_client": false, 00:15:00.970 "zerocopy_threshold": 0, 00:15:00.970 "tls_version": 0, 00:15:00.970 "enable_ktls": false 00:15:00.970 } 00:15:00.970 } 00:15:00.970 ] 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "subsystem": "vmd", 00:15:00.970 "config": [] 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "subsystem": "accel", 00:15:00.970 "config": [ 00:15:00.970 { 00:15:00.970 "method": "accel_set_options", 00:15:00.970 "params": { 00:15:00.970 "small_cache_size": 128, 00:15:00.970 "large_cache_size": 16, 00:15:00.970 "task_count": 2048, 00:15:00.970 "sequence_count": 2048, 00:15:00.970 "buf_count": 2048 00:15:00.970 } 00:15:00.970 } 00:15:00.970 ] 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "subsystem": "bdev", 00:15:00.970 "config": [ 00:15:00.970 { 00:15:00.970 "method": "bdev_set_options", 00:15:00.970 "params": { 00:15:00.970 "bdev_io_pool_size": 65535, 00:15:00.970 "bdev_io_cache_size": 256, 00:15:00.970 "bdev_auto_examine": true, 00:15:00.970 "iobuf_small_cache_size": 128, 00:15:00.970 "iobuf_large_cache_size": 16 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "bdev_raid_set_options", 00:15:00.970 "params": { 00:15:00.970 "process_window_size_kb": 1024, 00:15:00.970 "process_max_bandwidth_mb_sec": 0 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "bdev_iscsi_set_options", 00:15:00.970 "params": { 00:15:00.970 "timeout_sec": 30 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "bdev_nvme_set_options", 00:15:00.970 "params": { 00:15:00.970 "action_on_timeout": "none", 00:15:00.970 "timeout_us": 0, 00:15:00.970 "timeout_admin_us": 0, 00:15:00.970 "keep_alive_timeout_ms": 10000, 00:15:00.970 "arbitration_burst": 0, 00:15:00.970 "low_priority_weight": 0, 00:15:00.970 "medium_priority_weight": 0, 00:15:00.970 "high_priority_weight": 0, 00:15:00.970 "nvme_adminq_poll_period_us": 10000, 00:15:00.970 "nvme_ioq_poll_period_us": 0, 00:15:00.970 "io_queue_requests": 0, 00:15:00.970 "delay_cmd_submit": true, 00:15:00.970 "transport_retry_count": 4, 
00:15:00.970 "bdev_retry_count": 3, 00:15:00.970 "transport_ack_timeout": 0, 00:15:00.970 "ctrlr_loss_timeout_sec": 0, 00:15:00.970 "reconnect_delay_sec": 0, 00:15:00.970 "fast_io_fail_timeout_sec": 0, 00:15:00.970 "disable_auto_failback": false, 00:15:00.970 "generate_uuids": false, 00:15:00.970 "transport_tos": 0, 00:15:00.970 "nvme_error_stat": false, 00:15:00.970 "rdma_srq_size": 0, 00:15:00.970 "io_path_stat": false, 00:15:00.970 "allow_accel_sequence": false, 00:15:00.970 "rdma_max_cq_size": 0, 00:15:00.970 "rdma_cm_event_timeout_ms": 0, 00:15:00.970 "dhchap_digests": [ 00:15:00.970 "sha256", 00:15:00.970 "sha384", 00:15:00.970 "sha512" 00:15:00.970 ], 00:15:00.970 "dhchap_dhgroups": [ 00:15:00.970 "null", 00:15:00.970 "ffdhe2048", 00:15:00.970 "ffdhe3072", 00:15:00.970 "ffdhe4096", 00:15:00.970 "ffdhe6144", 00:15:00.970 "ffdhe8192" 00:15:00.970 ] 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "bdev_nvme_set_hotplug", 00:15:00.970 "params": { 00:15:00.970 "period_us": 100000, 00:15:00.970 "enable": false 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "bdev_malloc_create", 00:15:00.970 "params": { 00:15:00.970 "name": "malloc0", 00:15:00.970 "num_blocks": 8192, 00:15:00.970 "block_size": 4096, 00:15:00.970 "physical_block_size": 4096, 00:15:00.970 "uuid": "d38b4350-b207-4c00-9c60-cf032e2bfe77", 00:15:00.970 "optimal_io_boundary": 0, 00:15:00.970 "md_size": 0, 00:15:00.970 "dif_type": 0, 00:15:00.970 "dif_is_head_of_md": false, 00:15:00.970 "dif_pi_format": 0 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "bdev_wait_for_examine" 00:15:00.970 } 00:15:00.970 ] 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "subsystem": "nbd", 00:15:00.970 "config": [] 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "subsystem": "scheduler", 00:15:00.970 "config": [ 00:15:00.970 { 00:15:00.970 "method": "framework_set_scheduler", 00:15:00.970 "params": { 00:15:00.970 "name": "static" 00:15:00.970 } 00:15:00.970 } 00:15:00.970 ] 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "subsystem": "nvmf", 00:15:00.970 "config": [ 00:15:00.970 { 00:15:00.970 "method": "nvmf_set_config", 00:15:00.970 "params": { 00:15:00.970 "discovery_filter": "match_any", 00:15:00.970 "admin_cmd_passthru": { 00:15:00.970 "identify_ctrlr": false 00:15:00.970 } 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "nvmf_set_max_subsystems", 00:15:00.970 "params": { 00:15:00.970 "max_subsystems": 1024 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "nvmf_set_crdt", 00:15:00.970 "params": { 00:15:00.970 "crdt1": 0, 00:15:00.970 "crdt2": 0, 00:15:00.970 "crdt3": 0 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "nvmf_create_transport", 00:15:00.970 "params": { 00:15:00.970 "trtype": "TCP", 00:15:00.970 "max_queue_depth": 128, 00:15:00.970 "max_io_qpairs_per_ctrlr": 127, 00:15:00.970 "in_capsule_data_size": 4096, 00:15:00.970 "max_io_size": 131072, 00:15:00.970 "io_unit_size": 131072, 00:15:00.970 "max_aq_depth": 128, 00:15:00.970 "num_shared_buffers": 511, 00:15:00.970 "buf_cache_size": 4294967295, 00:15:00.970 "dif_insert_or_strip": false, 00:15:00.970 "zcopy": false, 00:15:00.970 "c2h_success": false, 00:15:00.970 "sock_priority": 0, 00:15:00.970 "abort_timeout_sec": 1, 00:15:00.970 "ack_timeout": 0, 00:15:00.970 "data_wr_pool_size": 0 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "nvmf_create_subsystem", 00:15:00.970 "params": { 00:15:00.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.970 
"allow_any_host": false, 00:15:00.970 "serial_number": "SPDK00000000000001", 00:15:00.970 "model_number": "SPDK bdev Controller", 00:15:00.970 "max_namespaces": 10, 00:15:00.970 "min_cntlid": 1, 00:15:00.970 "max_cntlid": 65519, 00:15:00.970 "ana_reporting": false 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "nvmf_subsystem_add_host", 00:15:00.970 "params": { 00:15:00.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.970 "host": "nqn.2016-06.io.spdk:host1", 00:15:00.970 "psk": "/tmp/tmp.GOCmR5ZfC5" 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "nvmf_subsystem_add_ns", 00:15:00.970 "params": { 00:15:00.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.970 "namespace": { 00:15:00.970 "nsid": 1, 00:15:00.970 "bdev_name": "malloc0", 00:15:00.970 "nguid": "D38B4350B2074C009C60CF032E2BFE77", 00:15:00.970 "uuid": "d38b4350-b207-4c00-9c60-cf032e2bfe77", 00:15:00.970 "no_auto_visible": false 00:15:00.970 } 00:15:00.970 } 00:15:00.970 }, 00:15:00.970 { 00:15:00.970 "method": "nvmf_subsystem_add_listener", 00:15:00.970 "params": { 00:15:00.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.970 "listen_address": { 00:15:00.970 "trtype": "TCP", 00:15:00.970 "adrfam": "IPv4", 00:15:00.970 "traddr": "10.0.0.2", 00:15:00.970 "trsvcid": "4420" 00:15:00.970 }, 00:15:00.970 "secure_channel": true 00:15:00.970 } 00:15:00.970 } 00:15:00.970 ] 00:15:00.970 } 00:15:00.970 ] 00:15:00.970 }' 00:15:00.970 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:01.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:01.229 "subsystems": [ 00:15:01.229 { 00:15:01.229 "subsystem": "keyring", 00:15:01.229 "config": [] 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 "subsystem": "iobuf", 00:15:01.229 "config": [ 00:15:01.229 { 00:15:01.229 "method": "iobuf_set_options", 00:15:01.229 "params": { 00:15:01.229 "small_pool_count": 8192, 00:15:01.229 "large_pool_count": 1024, 00:15:01.229 "small_bufsize": 8192, 00:15:01.229 "large_bufsize": 135168 00:15:01.229 } 00:15:01.229 } 00:15:01.229 ] 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 "subsystem": "sock", 00:15:01.229 "config": [ 00:15:01.229 { 00:15:01.229 "method": "sock_set_default_impl", 00:15:01.229 "params": { 00:15:01.229 "impl_name": "uring" 00:15:01.229 } 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 "method": "sock_impl_set_options", 00:15:01.229 "params": { 00:15:01.229 "impl_name": "ssl", 00:15:01.229 "recv_buf_size": 4096, 00:15:01.229 "send_buf_size": 4096, 00:15:01.229 "enable_recv_pipe": true, 00:15:01.229 "enable_quickack": false, 00:15:01.229 "enable_placement_id": 0, 00:15:01.229 "enable_zerocopy_send_server": true, 00:15:01.229 "enable_zerocopy_send_client": false, 00:15:01.229 "zerocopy_threshold": 0, 00:15:01.229 "tls_version": 0, 00:15:01.229 "enable_ktls": false 00:15:01.229 } 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 "method": "sock_impl_set_options", 00:15:01.229 "params": { 00:15:01.229 "impl_name": "posix", 00:15:01.229 "recv_buf_size": 2097152, 00:15:01.229 "send_buf_size": 2097152, 00:15:01.229 "enable_recv_pipe": true, 00:15:01.229 "enable_quickack": false, 00:15:01.229 "enable_placement_id": 0, 00:15:01.229 "enable_zerocopy_send_server": true, 00:15:01.229 "enable_zerocopy_send_client": false, 00:15:01.229 "zerocopy_threshold": 0, 00:15:01.229 "tls_version": 0, 00:15:01.229 "enable_ktls": false 00:15:01.229 } 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 
"method": "sock_impl_set_options", 00:15:01.229 "params": { 00:15:01.229 "impl_name": "uring", 00:15:01.229 "recv_buf_size": 2097152, 00:15:01.229 "send_buf_size": 2097152, 00:15:01.229 "enable_recv_pipe": true, 00:15:01.229 "enable_quickack": false, 00:15:01.229 "enable_placement_id": 0, 00:15:01.229 "enable_zerocopy_send_server": false, 00:15:01.229 "enable_zerocopy_send_client": false, 00:15:01.229 "zerocopy_threshold": 0, 00:15:01.229 "tls_version": 0, 00:15:01.229 "enable_ktls": false 00:15:01.229 } 00:15:01.229 } 00:15:01.229 ] 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 "subsystem": "vmd", 00:15:01.229 "config": [] 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 "subsystem": "accel", 00:15:01.229 "config": [ 00:15:01.229 { 00:15:01.229 "method": "accel_set_options", 00:15:01.229 "params": { 00:15:01.229 "small_cache_size": 128, 00:15:01.229 "large_cache_size": 16, 00:15:01.229 "task_count": 2048, 00:15:01.229 "sequence_count": 2048, 00:15:01.229 "buf_count": 2048 00:15:01.229 } 00:15:01.229 } 00:15:01.229 ] 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 "subsystem": "bdev", 00:15:01.229 "config": [ 00:15:01.229 { 00:15:01.229 "method": "bdev_set_options", 00:15:01.229 "params": { 00:15:01.229 "bdev_io_pool_size": 65535, 00:15:01.229 "bdev_io_cache_size": 256, 00:15:01.229 "bdev_auto_examine": true, 00:15:01.229 "iobuf_small_cache_size": 128, 00:15:01.229 "iobuf_large_cache_size": 16 00:15:01.229 } 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 "method": "bdev_raid_set_options", 00:15:01.229 "params": { 00:15:01.229 "process_window_size_kb": 1024, 00:15:01.229 "process_max_bandwidth_mb_sec": 0 00:15:01.229 } 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 "method": "bdev_iscsi_set_options", 00:15:01.229 "params": { 00:15:01.229 "timeout_sec": 30 00:15:01.229 } 00:15:01.229 }, 00:15:01.229 { 00:15:01.229 "method": "bdev_nvme_set_options", 00:15:01.229 "params": { 00:15:01.229 "action_on_timeout": "none", 00:15:01.229 "timeout_us": 0, 00:15:01.229 "timeout_admin_us": 0, 00:15:01.229 "keep_alive_timeout_ms": 10000, 00:15:01.229 "arbitration_burst": 0, 00:15:01.229 "low_priority_weight": 0, 00:15:01.229 "medium_priority_weight": 0, 00:15:01.229 "high_priority_weight": 0, 00:15:01.229 "nvme_adminq_poll_period_us": 10000, 00:15:01.229 "nvme_ioq_poll_period_us": 0, 00:15:01.229 "io_queue_requests": 512, 00:15:01.229 "delay_cmd_submit": true, 00:15:01.229 "transport_retry_count": 4, 00:15:01.229 "bdev_retry_count": 3, 00:15:01.229 "transport_ack_timeout": 0, 00:15:01.229 "ctrlr_loss_timeout_sec": 0, 00:15:01.229 "reconnect_delay_sec": 0, 00:15:01.229 "fast_io_fail_timeout_sec": 0, 00:15:01.229 "disable_auto_failback": false, 00:15:01.229 "generate_uuids": false, 00:15:01.229 "transport_tos": 0, 00:15:01.229 "nvme_error_stat": false, 00:15:01.229 "rdma_srq_size": 0, 00:15:01.229 "io_path_stat": false, 00:15:01.229 "allow_accel_sequence": false, 00:15:01.229 "rdma_max_cq_size": 0, 00:15:01.229 "rdma_cm_event_timeout_ms": 0, 00:15:01.229 "dhchap_digests": [ 00:15:01.229 "sha256", 00:15:01.229 "sha384", 00:15:01.229 "sha512" 00:15:01.229 ], 00:15:01.229 "dhchap_dhgroups": [ 00:15:01.229 "null", 00:15:01.229 "ffdhe2048", 00:15:01.229 "ffdhe3072", 00:15:01.229 "ffdhe4096", 00:15:01.229 "ffdhe6144", 00:15:01.229 "ffdhe8192" 00:15:01.229 ] 00:15:01.229 } 00:15:01.230 }, 00:15:01.230 { 00:15:01.230 "method": "bdev_nvme_attach_controller", 00:15:01.230 "params": { 00:15:01.230 "name": "TLSTEST", 00:15:01.230 "trtype": "TCP", 00:15:01.230 "adrfam": "IPv4", 00:15:01.230 "traddr": "10.0.0.2", 00:15:01.230 "trsvcid": 
"4420", 00:15:01.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.230 "prchk_reftag": false, 00:15:01.230 "prchk_guard": false, 00:15:01.230 "ctrlr_loss_timeout_sec": 0, 00:15:01.230 "reconnect_delay_sec": 0, 00:15:01.230 "fast_io_fail_timeout_sec": 0, 00:15:01.230 "psk": "/tmp/tmp.GOCmR5ZfC5", 00:15:01.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.230 "hdgst": false, 00:15:01.230 "ddgst": false 00:15:01.230 } 00:15:01.230 }, 00:15:01.230 { 00:15:01.230 "method": "bdev_nvme_set_hotplug", 00:15:01.230 "params": { 00:15:01.230 "period_us": 100000, 00:15:01.230 "enable": false 00:15:01.230 } 00:15:01.230 }, 00:15:01.230 { 00:15:01.230 "method": "bdev_wait_for_examine" 00:15:01.230 } 00:15:01.230 ] 00:15:01.230 }, 00:15:01.230 { 00:15:01.230 "subsystem": "nbd", 00:15:01.230 "config": [] 00:15:01.230 } 00:15:01.230 ] 00:15:01.230 }' 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 87084 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87084 ']' 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 87084 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87084 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:01.230 killing process with pid 87084 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87084' 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87084 00:15:01.230 Received shutdown signal, test time was about 10.000000 seconds 00:15:01.230 00:15:01.230 Latency(us) 00:15:01.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.230 =================================================================================================================== 00:15:01.230 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:01.230 [2024-07-23 04:10:54.341417] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87084 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 87034 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87034 ']' 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 87034 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87034 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:01.230 
04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87034' 00:15:01.230 killing process with pid 87034 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87034 00:15:01.230 [2024-07-23 04:10:54.548768] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:01.230 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87034 00:15:01.489 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:01.489 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.489 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.489 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.489 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:01.489 "subsystems": [ 00:15:01.489 { 00:15:01.489 "subsystem": "keyring", 00:15:01.489 "config": [] 00:15:01.489 }, 00:15:01.489 { 00:15:01.489 "subsystem": "iobuf", 00:15:01.489 "config": [ 00:15:01.489 { 00:15:01.489 "method": "iobuf_set_options", 00:15:01.489 "params": { 00:15:01.489 "small_pool_count": 8192, 00:15:01.489 "large_pool_count": 1024, 00:15:01.489 "small_bufsize": 8192, 00:15:01.489 "large_bufsize": 135168 00:15:01.489 } 00:15:01.489 } 00:15:01.489 ] 00:15:01.489 }, 00:15:01.489 { 00:15:01.489 "subsystem": "sock", 00:15:01.489 "config": [ 00:15:01.489 { 00:15:01.489 "method": "sock_set_default_impl", 00:15:01.489 "params": { 00:15:01.489 "impl_name": "uring" 00:15:01.489 } 00:15:01.489 }, 00:15:01.489 { 00:15:01.489 "method": "sock_impl_set_options", 00:15:01.489 "params": { 00:15:01.489 "impl_name": "ssl", 00:15:01.489 "recv_buf_size": 4096, 00:15:01.489 "send_buf_size": 4096, 00:15:01.489 "enable_recv_pipe": true, 00:15:01.489 "enable_quickack": false, 00:15:01.489 "enable_placement_id": 0, 00:15:01.489 "enable_zerocopy_send_server": true, 00:15:01.489 "enable_zerocopy_send_client": false, 00:15:01.489 "zerocopy_threshold": 0, 00:15:01.489 "tls_version": 0, 00:15:01.489 "enable_ktls": false 00:15:01.489 } 00:15:01.489 }, 00:15:01.489 { 00:15:01.489 "method": "sock_impl_set_options", 00:15:01.489 "params": { 00:15:01.489 "impl_name": "posix", 00:15:01.489 "recv_buf_size": 2097152, 00:15:01.489 "send_buf_size": 2097152, 00:15:01.489 "enable_recv_pipe": true, 00:15:01.489 "enable_quickack": false, 00:15:01.489 "enable_placement_id": 0, 00:15:01.489 "enable_zerocopy_send_server": true, 00:15:01.489 "enable_zerocopy_send_client": false, 00:15:01.489 "zerocopy_threshold": 0, 00:15:01.489 "tls_version": 0, 00:15:01.489 "enable_ktls": false 00:15:01.489 } 00:15:01.489 }, 00:15:01.489 { 00:15:01.489 "method": "sock_impl_set_options", 00:15:01.489 "params": { 00:15:01.489 "impl_name": "uring", 00:15:01.489 "recv_buf_size": 2097152, 00:15:01.489 "send_buf_size": 2097152, 00:15:01.489 "enable_recv_pipe": true, 00:15:01.489 "enable_quickack": false, 00:15:01.489 "enable_placement_id": 0, 00:15:01.489 "enable_zerocopy_send_server": false, 00:15:01.489 "enable_zerocopy_send_client": false, 00:15:01.489 "zerocopy_threshold": 0, 00:15:01.489 "tls_version": 0, 00:15:01.489 "enable_ktls": false 
00:15:01.489 } 00:15:01.489 } 00:15:01.489 ] 00:15:01.489 }, 00:15:01.489 { 00:15:01.489 "subsystem": "vmd", 00:15:01.489 "config": [] 00:15:01.489 }, 00:15:01.489 { 00:15:01.489 "subsystem": "accel", 00:15:01.489 "config": [ 00:15:01.489 { 00:15:01.489 "method": "accel_set_options", 00:15:01.489 "params": { 00:15:01.489 "small_cache_size": 128, 00:15:01.489 "large_cache_size": 16, 00:15:01.489 "task_count": 2048, 00:15:01.489 "sequence_count": 2048, 00:15:01.489 "buf_count": 2048 00:15:01.489 } 00:15:01.489 } 00:15:01.489 ] 00:15:01.489 }, 00:15:01.489 { 00:15:01.489 "subsystem": "bdev", 00:15:01.490 "config": [ 00:15:01.490 { 00:15:01.490 "method": "bdev_set_options", 00:15:01.490 "params": { 00:15:01.490 "bdev_io_pool_size": 65535, 00:15:01.490 "bdev_io_cache_size": 256, 00:15:01.490 "bdev_auto_examine": true, 00:15:01.490 "iobuf_small_cache_size": 128, 00:15:01.490 "iobuf_large_cache_size": 16 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_raid_set_options", 00:15:01.490 "params": { 00:15:01.490 "process_window_size_kb": 1024, 00:15:01.490 "process_max_bandwidth_mb_sec": 0 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_iscsi_set_options", 00:15:01.490 "params": { 00:15:01.490 "timeout_sec": 30 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_nvme_set_options", 00:15:01.490 "params": { 00:15:01.490 "action_on_timeout": "none", 00:15:01.490 "timeout_us": 0, 00:15:01.490 "timeout_admin_us": 0, 00:15:01.490 "keep_alive_timeout_ms": 10000, 00:15:01.490 "arbitration_burst": 0, 00:15:01.490 "low_priority_weight": 0, 00:15:01.490 "medium_priority_weight": 0, 00:15:01.490 "high_priority_weight": 0, 00:15:01.490 "nvme_adminq_poll_period_us": 10000, 00:15:01.490 "nvme_ioq_poll_period_us": 0, 00:15:01.490 "io_queue_requests": 0, 00:15:01.490 "delay_cmd_submit": true, 00:15:01.490 "transport_retry_count": 4, 00:15:01.490 "bdev_retry_count": 3, 00:15:01.490 "transport_ack_timeout": 0, 00:15:01.490 "ctrlr_loss_timeout_sec": 0, 00:15:01.490 "reconnect_delay_sec": 0, 00:15:01.490 "fast_io_fail_timeout_sec": 0, 00:15:01.490 "disable_auto_failback": false, 00:15:01.490 "generate_uuids": false, 00:15:01.490 "transport_tos": 0, 00:15:01.490 "nvme_error_stat": false, 00:15:01.490 "rdma_srq_size": 0, 00:15:01.490 "io_path_stat": false, 00:15:01.490 "allow_accel_sequence": false, 00:15:01.490 "rdma_max_cq_size": 0, 00:15:01.490 "rdma_cm_event_timeout_ms": 0, 00:15:01.490 "dhchap_digests": [ 00:15:01.490 "sha256", 00:15:01.490 "sha384", 00:15:01.490 "sha512" 00:15:01.490 ], 00:15:01.490 "dhchap_dhgroups": [ 00:15:01.490 "null", 00:15:01.490 "ffdhe2048", 00:15:01.490 "ffdhe3072", 00:15:01.490 "ffdhe4096", 00:15:01.490 "ffdhe6144", 00:15:01.490 "ffdhe8192" 00:15:01.490 ] 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_nvme_set_hotplug", 00:15:01.490 "params": { 00:15:01.490 "period_us": 100000, 00:15:01.490 "enable": false 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_malloc_create", 00:15:01.490 "params": { 00:15:01.490 "name": "malloc0", 00:15:01.490 "num_blocks": 8192, 00:15:01.490 "block_size": 4096, 00:15:01.490 "physical_block_size": 4096, 00:15:01.490 "uuid": "d38b4350-b207-4c00-9c60-cf032e2bfe77", 00:15:01.490 "optimal_io_boundary": 0, 00:15:01.490 "md_size": 0, 00:15:01.490 "dif_type": 0, 00:15:01.490 "dif_is_head_of_md": false, 00:15:01.490 "dif_pi_format": 0 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_wait_for_examine" 00:15:01.490 } 
00:15:01.490 ] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "nbd", 00:15:01.490 "config": [] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "scheduler", 00:15:01.490 "config": [ 00:15:01.490 { 00:15:01.490 "method": "framework_set_scheduler", 00:15:01.490 "params": { 00:15:01.490 "name": "static" 00:15:01.490 } 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "nvmf", 00:15:01.490 "config": [ 00:15:01.490 { 00:15:01.490 "method": "nvmf_set_config", 00:15:01.490 "params": { 00:15:01.490 "discovery_filter": "match_any", 00:15:01.490 "admin_cmd_passthru": { 00:15:01.490 "identify_ctrlr": false 00:15:01.490 } 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "nvmf_set_max_subsystems", 00:15:01.490 "params": { 00:15:01.490 "max_subsystems": 1024 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "nvmf_set_crdt", 00:15:01.490 "params": { 00:15:01.490 "crdt1": 0, 00:15:01.490 "crdt2": 0, 00:15:01.490 "crdt3": 0 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "nvmf_create_transport", 00:15:01.490 "params": { 00:15:01.490 "trtype": "TCP", 00:15:01.490 "max_queue_depth": 128, 00:15:01.490 "max_io_qpairs_per_ctrlr": 127, 00:15:01.490 "in_capsule_data_size": 4096, 00:15:01.490 "max_io_size": 131072, 00:15:01.490 "io_unit_size": 131072, 00:15:01.490 "max_aq_depth": 128, 00:15:01.490 "num_shared_buffers": 511, 00:15:01.490 "buf_cache_size": 4294967295, 00:15:01.490 "dif_insert_or_strip": false, 00:15:01.490 "zcopy": false, 00:15:01.490 "c2h_success": false, 00:15:01.490 "sock_priority": 0, 00:15:01.490 "abort_timeout_sec": 1, 00:15:01.490 "ack_timeout": 0, 00:15:01.490 "data_wr_pool_size": 0 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "nvmf_create_subsystem", 00:15:01.490 "params": { 00:15:01.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.490 "allow_any_host": false, 00:15:01.490 "serial_number": "SPDK00000000000001", 00:15:01.490 "model_number": "SPDK bdev Controller", 00:15:01.490 "max_namespaces": 10, 00:15:01.490 "min_cntlid": 1, 00:15:01.490 "max_cntlid": 65519, 00:15:01.490 "ana_reporting": false 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "nvmf_subsystem_add_host", 00:15:01.490 "params": { 00:15:01.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.490 "host": "nqn.2016-06.io.spdk:host1", 00:15:01.490 "psk": "/tmp/tmp.GOCmR5ZfC5" 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "nvmf_subsystem_add_ns", 00:15:01.490 "params": { 00:15:01.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.490 "namespace": { 00:15:01.490 "nsid": 1, 00:15:01.490 "bdev_name": "malloc0", 00:15:01.490 "nguid": "D38B4350B2074C009C60CF032E2BFE77", 00:15:01.490 "uuid": "d38b4350-b207-4c00-9c60-cf032e2bfe77", 00:15:01.490 "no_auto_visible": false 00:15:01.490 } 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "nvmf_subsystem_add_listener", 00:15:01.490 "params": { 00:15:01.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.490 "listen_address": { 00:15:01.490 "trtype": "TCP", 00:15:01.490 "adrfam": "IPv4", 00:15:01.490 "traddr": "10.0.0.2", 00:15:01.490 "trsvcid": "4420" 00:15:01.490 }, 00:15:01.490 "secure_channel": true 00:15:01.490 } 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 }' 00:15:01.490 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=87127 00:15:01.490 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:01.490 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 87127 00:15:01.490 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87127 ']' 00:15:01.490 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.490 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.490 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.490 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.490 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.490 [2024-07-23 04:10:54.797798] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:15:01.490 [2024-07-23 04:10:54.797875] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.748 [2024-07-23 04:10:54.915527] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:01.749 [2024-07-23 04:10:54.926031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.749 [2024-07-23 04:10:54.987746] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.749 [2024-07-23 04:10:54.987792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.749 [2024-07-23 04:10:54.987801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.749 [2024-07-23 04:10:54.987808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.749 [2024-07-23 04:10:54.987814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
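The '-c /dev/fd/62' in the nvmf_tgt command line above is a bash process-substitution file descriptor: the tgtconf JSON echoed just before it is handed to the new target instance as its startup configuration, so the restored target comes back with the same transport, subsystem, TLS listener and PSK host entry without any further RPC calls. Roughly (a sketch, assuming $tgtconf holds the JSON shown above; the fd number bash assigns can differ):

# Sketch: replay the saved configuration into a fresh target inside the test netns.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")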
00:15:01.749 [2024-07-23 04:10:54.987884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.007 [2024-07-23 04:10:55.151678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:02.007 [2024-07-23 04:10:55.213477] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.007 [2024-07-23 04:10:55.229416] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:02.007 [2024-07-23 04:10:55.245440] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:02.007 [2024-07-23 04:10:55.253144] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=87158 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 87158 /var/tmp/bdevperf.sock 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87158 ']' 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
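At this point the replayed configuration has brought the target back up with the TLS-enabled listener on 10.0.0.2 port 4420. To double-check the restored state by hand, dumping the subsystems over the RPC socket should be enough (a sketch; this query is not part of the traced test flow):

# Sketch: confirm cnode1, its TLS listener and the allowed host survived the config round-trip.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems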
00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.575 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:02.575 "subsystems": [ 00:15:02.575 { 00:15:02.575 "subsystem": "keyring", 00:15:02.575 "config": [] 00:15:02.575 }, 00:15:02.575 { 00:15:02.575 "subsystem": "iobuf", 00:15:02.575 "config": [ 00:15:02.575 { 00:15:02.575 "method": "iobuf_set_options", 00:15:02.575 "params": { 00:15:02.575 "small_pool_count": 8192, 00:15:02.575 "large_pool_count": 1024, 00:15:02.575 "small_bufsize": 8192, 00:15:02.575 "large_bufsize": 135168 00:15:02.575 } 00:15:02.575 } 00:15:02.575 ] 00:15:02.575 }, 00:15:02.575 { 00:15:02.575 "subsystem": "sock", 00:15:02.575 "config": [ 00:15:02.575 { 00:15:02.575 "method": "sock_set_default_impl", 00:15:02.575 "params": { 00:15:02.575 "impl_name": "uring" 00:15:02.575 } 00:15:02.575 }, 00:15:02.575 { 00:15:02.575 "method": "sock_impl_set_options", 00:15:02.575 "params": { 00:15:02.575 "impl_name": "ssl", 00:15:02.575 "recv_buf_size": 4096, 00:15:02.575 "send_buf_size": 4096, 00:15:02.575 "enable_recv_pipe": true, 00:15:02.575 "enable_quickack": false, 00:15:02.575 "enable_placement_id": 0, 00:15:02.575 "enable_zerocopy_send_server": true, 00:15:02.575 "enable_zerocopy_send_client": false, 00:15:02.575 "zerocopy_threshold": 0, 00:15:02.575 "tls_version": 0, 00:15:02.575 "enable_ktls": false 00:15:02.575 } 00:15:02.575 }, 00:15:02.575 { 00:15:02.575 "method": "sock_impl_set_options", 00:15:02.575 "params": { 00:15:02.575 "impl_name": "posix", 00:15:02.575 "recv_buf_size": 2097152, 00:15:02.575 "send_buf_size": 2097152, 00:15:02.575 "enable_recv_pipe": true, 00:15:02.575 "enable_quickack": false, 00:15:02.575 "enable_placement_id": 0, 00:15:02.575 "enable_zerocopy_send_server": true, 00:15:02.575 "enable_zerocopy_send_client": false, 00:15:02.575 "zerocopy_threshold": 0, 00:15:02.575 "tls_version": 0, 00:15:02.575 "enable_ktls": false 00:15:02.575 } 00:15:02.575 }, 00:15:02.575 { 00:15:02.575 "method": "sock_impl_set_options", 00:15:02.575 "params": { 00:15:02.575 "impl_name": "uring", 00:15:02.575 "recv_buf_size": 2097152, 00:15:02.575 "send_buf_size": 2097152, 00:15:02.575 "enable_recv_pipe": true, 00:15:02.575 "enable_quickack": false, 00:15:02.575 "enable_placement_id": 0, 00:15:02.575 "enable_zerocopy_send_server": false, 00:15:02.575 "enable_zerocopy_send_client": false, 00:15:02.575 "zerocopy_threshold": 0, 00:15:02.575 "tls_version": 0, 00:15:02.575 "enable_ktls": false 00:15:02.575 } 00:15:02.575 } 00:15:02.575 ] 00:15:02.575 }, 00:15:02.575 { 00:15:02.575 "subsystem": "vmd", 00:15:02.575 "config": [] 00:15:02.575 }, 00:15:02.575 { 00:15:02.575 "subsystem": "accel", 00:15:02.575 "config": [ 00:15:02.575 { 00:15:02.575 "method": "accel_set_options", 00:15:02.575 "params": { 00:15:02.575 "small_cache_size": 128, 00:15:02.575 "large_cache_size": 16, 00:15:02.575 "task_count": 2048, 00:15:02.575 "sequence_count": 2048, 00:15:02.575 "buf_count": 2048 00:15:02.576 } 00:15:02.576 } 00:15:02.576 ] 00:15:02.576 }, 00:15:02.576 { 00:15:02.576 "subsystem": "bdev", 00:15:02.576 "config": [ 00:15:02.576 { 00:15:02.576 "method": "bdev_set_options", 00:15:02.576 "params": { 00:15:02.576 "bdev_io_pool_size": 65535, 00:15:02.576 "bdev_io_cache_size": 256, 00:15:02.576 "bdev_auto_examine": true, 00:15:02.576 "iobuf_small_cache_size": 128, 00:15:02.576 "iobuf_large_cache_size": 
16 00:15:02.576 } 00:15:02.576 }, 00:15:02.576 { 00:15:02.576 "method": "bdev_raid_set_options", 00:15:02.576 "params": { 00:15:02.576 "process_window_size_kb": 1024, 00:15:02.576 "process_max_bandwidth_mb_sec": 0 00:15:02.576 } 00:15:02.576 }, 00:15:02.576 { 00:15:02.576 "method": "bdev_iscsi_set_options", 00:15:02.576 "params": { 00:15:02.576 "timeout_sec": 30 00:15:02.576 } 00:15:02.576 }, 00:15:02.576 { 00:15:02.576 "method": "bdev_nvme_set_options", 00:15:02.576 "params": { 00:15:02.576 "action_on_timeout": "none", 00:15:02.576 "timeout_us": 0, 00:15:02.576 "timeout_admin_us": 0, 00:15:02.576 "keep_alive_timeout_ms": 10000, 00:15:02.576 "arbitration_burst": 0, 00:15:02.576 "low_priority_weight": 0, 00:15:02.576 "medium_priority_weight": 0, 00:15:02.576 "high_priority_weight": 0, 00:15:02.576 "nvme_adminq_poll_period_us": 10000, 00:15:02.576 "nvme_ioq_poll_period_us": 0, 00:15:02.576 "io_queue_requests": 512, 00:15:02.576 "delay_cmd_submit": true, 00:15:02.576 "transport_retry_count": 4, 00:15:02.576 "bdev_retry_count": 3, 00:15:02.576 "transport_ack_timeout": 0, 00:15:02.576 "ctrlr_loss_timeout_sec": 0, 00:15:02.576 "reconnect_delay_sec": 0, 00:15:02.576 "fast_io_fail_timeout_sec": 0, 00:15:02.576 "disable_auto_failback": false, 00:15:02.576 "generate_uuids": false, 00:15:02.576 "transport_tos": 0, 00:15:02.576 "nvme_error_stat": false, 00:15:02.576 "rdma_srq_size": 0, 00:15:02.576 "io_path_stat": false, 00:15:02.576 "allow_accel_sequence": false, 00:15:02.576 "rdma_max_cq_size": 0, 00:15:02.576 "rdma_cm_event_timeout_ms": 0, 00:15:02.576 "dhchap_digests": [ 00:15:02.576 "sha256", 00:15:02.576 "sha384", 00:15:02.576 "sha512" 00:15:02.576 ], 00:15:02.576 "dhchap_dhgroups": [ 00:15:02.576 "null", 00:15:02.576 "ffdhe2048", 00:15:02.576 "ffdhe3072", 00:15:02.576 "ffdhe4096", 00:15:02.576 "ffdhe6144", 00:15:02.576 "ffdhe8192" 00:15:02.576 ] 00:15:02.576 } 00:15:02.576 }, 00:15:02.576 { 00:15:02.576 "method": "bdev_nvme_attach_controller", 00:15:02.576 "params": { 00:15:02.576 "name": "TLSTEST", 00:15:02.576 "trtype": "TCP", 00:15:02.576 "adrfam": "IPv4", 00:15:02.576 "traddr": "10.0.0.2", 00:15:02.576 "trsvcid": "4420", 00:15:02.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.576 "prchk_reftag": false, 00:15:02.576 "prchk_guard": false, 00:15:02.576 "ctrlr_loss_timeout_sec": 0, 00:15:02.576 "reconnect_delay_sec": 0, 00:15:02.576 "fast_io_fail_timeout_sec": 0, 00:15:02.576 "psk": "/tmp/tmp.GOCmR5ZfC5", 00:15:02.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.576 "hdgst": false, 00:15:02.576 "ddgst": false 00:15:02.576 } 00:15:02.576 }, 00:15:02.576 { 00:15:02.576 "method": "bdev_nvme_set_hotplug", 00:15:02.576 "params": { 00:15:02.576 "period_us": 100000, 00:15:02.576 "enable": false 00:15:02.576 } 00:15:02.576 }, 00:15:02.576 { 00:15:02.576 "method": "bdev_wait_for_examine" 00:15:02.576 } 00:15:02.576 ] 00:15:02.576 }, 00:15:02.576 { 00:15:02.576 "subsystem": "nbd", 00:15:02.576 "config": [] 00:15:02.576 } 00:15:02.576 ] 00:15:02.576 }' 00:15:02.576 [2024-07-23 04:10:55.782972] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:15:02.576 [2024-07-23 04:10:55.783066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87158 ] 00:15:02.576 [2024-07-23 04:10:55.905726] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:02.835 [2024-07-23 04:10:55.923594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.835 [2024-07-23 04:10:55.980022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.835 [2024-07-23 04:10:56.111325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:02.835 [2024-07-23 04:10:56.143672] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:02.835 [2024-07-23 04:10:56.143815] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:03.401 04:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.401 04:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:03.401 04:10:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:03.660 Running I/O for 10 seconds... 00:15:13.664 00:15:13.664 Latency(us) 00:15:13.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.664 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:13.664 Verification LBA range: start 0x0 length 0x2000 00:15:13.664 TLSTESTn1 : 10.03 4557.16 17.80 0.00 0.00 28037.58 6196.13 19065.02 00:15:13.664 =================================================================================================================== 00:15:13.664 Total : 4557.16 17.80 0.00 0.00 28037.58 6196.13 19065.02 00:15:13.664 0 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 87158 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87158 ']' 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 87158 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87158 00:15:13.664 killing process with pid 87158 00:15:13.664 Received shutdown signal, test time was about 10.000000 seconds 00:15:13.664 00:15:13.664 Latency(us) 00:15:13.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.664 =================================================================================================================== 00:15:13.664 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:13.664 
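Because bdevperf was started with -z it only initializes and then waits; its configuration arrived the same way as the target's, as the generated bdevperfconf JSON fed in through -c /dev/fd/63, and the actual 10-second verify workload against TLSTESTn1 is triggered over /var/tmp/bdevperf.sock by bdevperf.py, which is what produces the latency table above. Condensed (a sketch; backgrounding with & stands in for the script's own process management, and -t 20 on bdevperf.py looks like the RPC wait timeout rather than the test length):

# Sketch: bdevperf in wait-for-RPC mode, configured via process substitution, then kicked off over RPC.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests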
04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87158' 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87158 00:15:13.664 [2024-07-23 04:11:06.900856] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:13.664 04:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87158 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 87127 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87127 ']' 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 87127 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87127 00:15:13.923 killing process with pid 87127 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87127' 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87127 00:15:13.923 [2024-07-23 04:11:07.123709] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:13.923 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87127 00:15:14.181 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:14.181 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:14.181 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:14.181 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.181 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:14.181 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=87298 00:15:14.181 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 87298 00:15:14.182 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87298 ']' 00:15:14.182 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.182 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:14.182 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.182 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.182 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.182 [2024-07-23 04:11:07.362530] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:15:14.182 [2024-07-23 04:11:07.362609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.182 [2024-07-23 04:11:07.478461] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:14.182 [2024-07-23 04:11:07.497683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.443 [2024-07-23 04:11:07.562703] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.443 [2024-07-23 04:11:07.563075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.443 [2024-07-23 04:11:07.563243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.443 [2024-07-23 04:11:07.563310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.443 [2024-07-23 04:11:07.563428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:14.443 [2024-07-23 04:11:07.563518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.443 [2024-07-23 04:11:07.619078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:14.443 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.443 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:14.443 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:14.443 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.443 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.443 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.443 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.GOCmR5ZfC5 00:15:14.443 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GOCmR5ZfC5 00:15:14.443 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:14.709 [2024-07-23 04:11:07.893576] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.709 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:14.967 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:15.226 [2024-07-23 04:11:08.353669] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:15.226 [2024-07-23 04:11:08.353866] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.226 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:15.226 malloc0 00:15:15.484 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:15.484 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GOCmR5ZfC5 00:15:15.743 [2024-07-23 04:11:08.931818] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:15.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:15.743 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=87334 00:15:15.743 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:15.743 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.743 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 87334 /var/tmp/bdevperf.sock 00:15:15.743 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87334 ']' 00:15:15.743 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.743 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.743 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.743 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.743 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.743 [2024-07-23 04:11:08.991468] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:15:15.743 [2024-07-23 04:11:08.991751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87334 ] 00:15:16.002 [2024-07-23 04:11:09.107881] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:16.002 [2024-07-23 04:11:09.123985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.002 [2024-07-23 04:11:09.200740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.002 [2024-07-23 04:11:09.253510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:16.568 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.568 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:16.568 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GOCmR5ZfC5 00:15:16.827 04:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:17.085 [2024-07-23 04:11:10.279091] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.085 nvme0n1 00:15:17.085 04:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:17.343 Running I/O for 1 seconds... 
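This second pass exercises the newer keyring path on the initiator side: instead of handing bdev_nvme_attach_controller a PSK file path directly (the deprecated form used earlier), the key file is first registered under a name with keyring_file_add_key and the controller then references that name via --psk key0. Condensed from the two RPCs traced above (a sketch):

# Sketch: register the PSK file as a named key, then attach the TLS controller by key name.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GOCmR5ZfC5
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1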
00:15:18.278 00:15:18.278 Latency(us) 00:15:18.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.278 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:18.278 Verification LBA range: start 0x0 length 0x2000 00:15:18.278 nvme0n1 : 1.02 4857.17 18.97 0.00 0.00 26061.87 6732.33 18230.92 00:15:18.278 =================================================================================================================== 00:15:18.278 Total : 4857.17 18.97 0.00 0.00 26061.87 6732.33 18230.92 00:15:18.278 0 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 87334 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87334 ']' 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 87334 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87334 00:15:18.278 killing process with pid 87334 00:15:18.278 Received shutdown signal, test time was about 1.000000 seconds 00:15:18.278 00:15:18.278 Latency(us) 00:15:18.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.278 =================================================================================================================== 00:15:18.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87334' 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87334 00:15:18.278 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87334 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 87298 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87298 ']' 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 87298 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87298 00:15:18.535 killing process with pid 87298 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87298' 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87298 00:15:18.535 [2024-07-23 04:11:11.757015] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: 
deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:18.535 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87298 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=87385 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 87385 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87385 ']' 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.793 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.793 [2024-07-23 04:11:12.011234] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:15:18.793 [2024-07-23 04:11:12.011506] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.793 [2024-07-23 04:11:12.134356] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:19.052 [2024-07-23 04:11:12.151375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.052 [2024-07-23 04:11:12.225662] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.052 [2024-07-23 04:11:12.225709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.052 [2024-07-23 04:11:12.225720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.052 [2024-07-23 04:11:12.225727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.052 [2024-07-23 04:11:12.225733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
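For reference, the client-side TLS flow traced above (target/tls.sh lines 227, 228 and 232) reduces to three calls against the bdevperf RPC socket. A minimal sketch using the exact paths, address and names from this run; it assumes the target still exposes nqn.2016-06.io.spdk:cnode1 with TLS enabled on 10.0.0.2:4420:
# register the PSK interchange file under the name key0 with the running bdevperf app
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GOCmR5ZfC5
# attach the TLS-protected subsystem, referencing the registered key by name
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# drive the verify workload through the resulting nvme0n1 bdev
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
As the notices in the trace confirm, this attach path still reports TLS support as experimental.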
00:15:19.052 [2024-07-23 04:11:12.225756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.052 [2024-07-23 04:11:12.276141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:19.619 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.619 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:19.619 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.619 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.619 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.878 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.878 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:15:19.878 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.878 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.878 [2024-07-23 04:11:13.003197] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.878 malloc0 00:15:19.878 [2024-07-23 04:11:13.033646] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:19.878 [2024-07-23 04:11:13.033818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.878 04:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.878 04:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=87418 00:15:19.878 04:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:19.878 04:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 87418 /var/tmp/bdevperf.sock 00:15:19.878 04:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87418 ']' 00:15:19.878 04:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.878 04:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.878 04:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:19.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:19.878 04:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.878 04:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.878 [2024-07-23 04:11:13.106811] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:15:19.878 [2024-07-23 04:11:13.107132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87418 ] 00:15:20.135 [2024-07-23 04:11:13.223525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:20.135 [2024-07-23 04:11:13.243848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.135 [2024-07-23 04:11:13.315944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.135 [2024-07-23 04:11:13.372840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:20.703 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.703 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:20.703 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GOCmR5ZfC5 00:15:20.963 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:21.221 [2024-07-23 04:11:14.383215] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:21.221 nvme0n1 00:15:21.221 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:21.479 Running I/O for 1 seconds... 
00:15:22.466 00:15:22.466 Latency(us) 00:15:22.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.466 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:22.466 Verification LBA range: start 0x0 length 0x2000 00:15:22.466 nvme0n1 : 1.01 4792.68 18.72 0.00 0.00 26454.76 6225.92 17515.99 00:15:22.466 =================================================================================================================== 00:15:22.466 Total : 4792.68 18.72 0.00 0.00 26454.76 6225.92 17515.99 00:15:22.466 0 00:15:22.466 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:15:22.466 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.466 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.466 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.466 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:15:22.466 "subsystems": [ 00:15:22.466 { 00:15:22.466 "subsystem": "keyring", 00:15:22.466 "config": [ 00:15:22.466 { 00:15:22.466 "method": "keyring_file_add_key", 00:15:22.466 "params": { 00:15:22.466 "name": "key0", 00:15:22.466 "path": "/tmp/tmp.GOCmR5ZfC5" 00:15:22.466 } 00:15:22.466 } 00:15:22.466 ] 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "subsystem": "iobuf", 00:15:22.466 "config": [ 00:15:22.466 { 00:15:22.466 "method": "iobuf_set_options", 00:15:22.466 "params": { 00:15:22.466 "small_pool_count": 8192, 00:15:22.466 "large_pool_count": 1024, 00:15:22.466 "small_bufsize": 8192, 00:15:22.466 "large_bufsize": 135168 00:15:22.466 } 00:15:22.466 } 00:15:22.466 ] 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "subsystem": "sock", 00:15:22.466 "config": [ 00:15:22.466 { 00:15:22.466 "method": "sock_set_default_impl", 00:15:22.466 "params": { 00:15:22.466 "impl_name": "uring" 00:15:22.466 } 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "method": "sock_impl_set_options", 00:15:22.466 "params": { 00:15:22.466 "impl_name": "ssl", 00:15:22.466 "recv_buf_size": 4096, 00:15:22.466 "send_buf_size": 4096, 00:15:22.466 "enable_recv_pipe": true, 00:15:22.466 "enable_quickack": false, 00:15:22.466 "enable_placement_id": 0, 00:15:22.466 "enable_zerocopy_send_server": true, 00:15:22.466 "enable_zerocopy_send_client": false, 00:15:22.466 "zerocopy_threshold": 0, 00:15:22.466 "tls_version": 0, 00:15:22.466 "enable_ktls": false 00:15:22.466 } 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "method": "sock_impl_set_options", 00:15:22.466 "params": { 00:15:22.466 "impl_name": "posix", 00:15:22.466 "recv_buf_size": 2097152, 00:15:22.466 "send_buf_size": 2097152, 00:15:22.466 "enable_recv_pipe": true, 00:15:22.466 "enable_quickack": false, 00:15:22.466 "enable_placement_id": 0, 00:15:22.466 "enable_zerocopy_send_server": true, 00:15:22.466 "enable_zerocopy_send_client": false, 00:15:22.466 "zerocopy_threshold": 0, 00:15:22.466 "tls_version": 0, 00:15:22.466 "enable_ktls": false 00:15:22.466 } 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "method": "sock_impl_set_options", 00:15:22.466 "params": { 00:15:22.466 "impl_name": "uring", 00:15:22.466 "recv_buf_size": 2097152, 00:15:22.466 "send_buf_size": 2097152, 00:15:22.466 "enable_recv_pipe": true, 00:15:22.466 "enable_quickack": false, 00:15:22.466 "enable_placement_id": 0, 00:15:22.466 "enable_zerocopy_send_server": false, 00:15:22.466 "enable_zerocopy_send_client": false, 00:15:22.466 
"zerocopy_threshold": 0, 00:15:22.466 "tls_version": 0, 00:15:22.466 "enable_ktls": false 00:15:22.466 } 00:15:22.466 } 00:15:22.466 ] 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "subsystem": "vmd", 00:15:22.466 "config": [] 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "subsystem": "accel", 00:15:22.466 "config": [ 00:15:22.466 { 00:15:22.466 "method": "accel_set_options", 00:15:22.466 "params": { 00:15:22.466 "small_cache_size": 128, 00:15:22.466 "large_cache_size": 16, 00:15:22.466 "task_count": 2048, 00:15:22.466 "sequence_count": 2048, 00:15:22.466 "buf_count": 2048 00:15:22.466 } 00:15:22.466 } 00:15:22.466 ] 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "subsystem": "bdev", 00:15:22.466 "config": [ 00:15:22.466 { 00:15:22.466 "method": "bdev_set_options", 00:15:22.466 "params": { 00:15:22.466 "bdev_io_pool_size": 65535, 00:15:22.466 "bdev_io_cache_size": 256, 00:15:22.466 "bdev_auto_examine": true, 00:15:22.466 "iobuf_small_cache_size": 128, 00:15:22.466 "iobuf_large_cache_size": 16 00:15:22.466 } 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "method": "bdev_raid_set_options", 00:15:22.466 "params": { 00:15:22.466 "process_window_size_kb": 1024, 00:15:22.466 "process_max_bandwidth_mb_sec": 0 00:15:22.466 } 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "method": "bdev_iscsi_set_options", 00:15:22.466 "params": { 00:15:22.466 "timeout_sec": 30 00:15:22.466 } 00:15:22.466 }, 00:15:22.466 { 00:15:22.466 "method": "bdev_nvme_set_options", 00:15:22.466 "params": { 00:15:22.466 "action_on_timeout": "none", 00:15:22.466 "timeout_us": 0, 00:15:22.466 "timeout_admin_us": 0, 00:15:22.466 "keep_alive_timeout_ms": 10000, 00:15:22.466 "arbitration_burst": 0, 00:15:22.466 "low_priority_weight": 0, 00:15:22.466 "medium_priority_weight": 0, 00:15:22.466 "high_priority_weight": 0, 00:15:22.466 "nvme_adminq_poll_period_us": 10000, 00:15:22.466 "nvme_ioq_poll_period_us": 0, 00:15:22.466 "io_queue_requests": 0, 00:15:22.466 "delay_cmd_submit": true, 00:15:22.466 "transport_retry_count": 4, 00:15:22.466 "bdev_retry_count": 3, 00:15:22.466 "transport_ack_timeout": 0, 00:15:22.466 "ctrlr_loss_timeout_sec": 0, 00:15:22.466 "reconnect_delay_sec": 0, 00:15:22.466 "fast_io_fail_timeout_sec": 0, 00:15:22.466 "disable_auto_failback": false, 00:15:22.466 "generate_uuids": false, 00:15:22.466 "transport_tos": 0, 00:15:22.466 "nvme_error_stat": false, 00:15:22.466 "rdma_srq_size": 0, 00:15:22.466 "io_path_stat": false, 00:15:22.466 "allow_accel_sequence": false, 00:15:22.466 "rdma_max_cq_size": 0, 00:15:22.466 "rdma_cm_event_timeout_ms": 0, 00:15:22.466 "dhchap_digests": [ 00:15:22.466 "sha256", 00:15:22.466 "sha384", 00:15:22.466 "sha512" 00:15:22.466 ], 00:15:22.466 "dhchap_dhgroups": [ 00:15:22.466 "null", 00:15:22.466 "ffdhe2048", 00:15:22.467 "ffdhe3072", 00:15:22.467 "ffdhe4096", 00:15:22.467 "ffdhe6144", 00:15:22.467 "ffdhe8192" 00:15:22.467 ] 00:15:22.467 } 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "method": "bdev_nvme_set_hotplug", 00:15:22.467 "params": { 00:15:22.467 "period_us": 100000, 00:15:22.467 "enable": false 00:15:22.467 } 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "method": "bdev_malloc_create", 00:15:22.467 "params": { 00:15:22.467 "name": "malloc0", 00:15:22.467 "num_blocks": 8192, 00:15:22.467 "block_size": 4096, 00:15:22.467 "physical_block_size": 4096, 00:15:22.467 "uuid": "8ffddfec-b496-4ae3-851b-793baeaf6faa", 00:15:22.467 "optimal_io_boundary": 0, 00:15:22.467 "md_size": 0, 00:15:22.467 "dif_type": 0, 00:15:22.467 "dif_is_head_of_md": false, 00:15:22.467 "dif_pi_format": 0 00:15:22.467 } 
00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "method": "bdev_wait_for_examine" 00:15:22.467 } 00:15:22.467 ] 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "subsystem": "nbd", 00:15:22.467 "config": [] 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "subsystem": "scheduler", 00:15:22.467 "config": [ 00:15:22.467 { 00:15:22.467 "method": "framework_set_scheduler", 00:15:22.467 "params": { 00:15:22.467 "name": "static" 00:15:22.467 } 00:15:22.467 } 00:15:22.467 ] 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "subsystem": "nvmf", 00:15:22.467 "config": [ 00:15:22.467 { 00:15:22.467 "method": "nvmf_set_config", 00:15:22.467 "params": { 00:15:22.467 "discovery_filter": "match_any", 00:15:22.467 "admin_cmd_passthru": { 00:15:22.467 "identify_ctrlr": false 00:15:22.467 } 00:15:22.467 } 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "method": "nvmf_set_max_subsystems", 00:15:22.467 "params": { 00:15:22.467 "max_subsystems": 1024 00:15:22.467 } 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "method": "nvmf_set_crdt", 00:15:22.467 "params": { 00:15:22.467 "crdt1": 0, 00:15:22.467 "crdt2": 0, 00:15:22.467 "crdt3": 0 00:15:22.467 } 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "method": "nvmf_create_transport", 00:15:22.467 "params": { 00:15:22.467 "trtype": "TCP", 00:15:22.467 "max_queue_depth": 128, 00:15:22.467 "max_io_qpairs_per_ctrlr": 127, 00:15:22.467 "in_capsule_data_size": 4096, 00:15:22.467 "max_io_size": 131072, 00:15:22.467 "io_unit_size": 131072, 00:15:22.467 "max_aq_depth": 128, 00:15:22.467 "num_shared_buffers": 511, 00:15:22.467 "buf_cache_size": 4294967295, 00:15:22.467 "dif_insert_or_strip": false, 00:15:22.467 "zcopy": false, 00:15:22.467 "c2h_success": false, 00:15:22.467 "sock_priority": 0, 00:15:22.467 "abort_timeout_sec": 1, 00:15:22.467 "ack_timeout": 0, 00:15:22.467 "data_wr_pool_size": 0 00:15:22.467 } 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "method": "nvmf_create_subsystem", 00:15:22.467 "params": { 00:15:22.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.467 "allow_any_host": false, 00:15:22.467 "serial_number": "00000000000000000000", 00:15:22.467 "model_number": "SPDK bdev Controller", 00:15:22.467 "max_namespaces": 32, 00:15:22.467 "min_cntlid": 1, 00:15:22.467 "max_cntlid": 65519, 00:15:22.467 "ana_reporting": false 00:15:22.467 } 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "method": "nvmf_subsystem_add_host", 00:15:22.467 "params": { 00:15:22.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.467 "host": "nqn.2016-06.io.spdk:host1", 00:15:22.467 "psk": "key0" 00:15:22.467 } 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "method": "nvmf_subsystem_add_ns", 00:15:22.467 "params": { 00:15:22.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.467 "namespace": { 00:15:22.467 "nsid": 1, 00:15:22.467 "bdev_name": "malloc0", 00:15:22.467 "nguid": "8FFDDFECB4964AE3851B793BAEAF6FAA", 00:15:22.467 "uuid": "8ffddfec-b496-4ae3-851b-793baeaf6faa", 00:15:22.467 "no_auto_visible": false 00:15:22.467 } 00:15:22.467 } 00:15:22.467 }, 00:15:22.467 { 00:15:22.467 "method": "nvmf_subsystem_add_listener", 00:15:22.467 "params": { 00:15:22.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.467 "listen_address": { 00:15:22.467 "trtype": "TCP", 00:15:22.467 "adrfam": "IPv4", 00:15:22.467 "traddr": "10.0.0.2", 00:15:22.467 "trsvcid": "4420" 00:15:22.467 }, 00:15:22.467 "secure_channel": false, 00:15:22.467 "sock_impl": "ssl" 00:15:22.467 } 00:15:22.467 } 00:15:22.467 ] 00:15:22.467 } 00:15:22.467 ] 00:15:22.467 }' 00:15:22.467 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:23.034 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:15:23.034 "subsystems": [ 00:15:23.034 { 00:15:23.034 "subsystem": "keyring", 00:15:23.034 "config": [ 00:15:23.034 { 00:15:23.034 "method": "keyring_file_add_key", 00:15:23.034 "params": { 00:15:23.034 "name": "key0", 00:15:23.034 "path": "/tmp/tmp.GOCmR5ZfC5" 00:15:23.034 } 00:15:23.034 } 00:15:23.034 ] 00:15:23.034 }, 00:15:23.034 { 00:15:23.034 "subsystem": "iobuf", 00:15:23.034 "config": [ 00:15:23.034 { 00:15:23.034 "method": "iobuf_set_options", 00:15:23.034 "params": { 00:15:23.034 "small_pool_count": 8192, 00:15:23.034 "large_pool_count": 1024, 00:15:23.034 "small_bufsize": 8192, 00:15:23.034 "large_bufsize": 135168 00:15:23.034 } 00:15:23.035 } 00:15:23.035 ] 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "subsystem": "sock", 00:15:23.035 "config": [ 00:15:23.035 { 00:15:23.035 "method": "sock_set_default_impl", 00:15:23.035 "params": { 00:15:23.035 "impl_name": "uring" 00:15:23.035 } 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "method": "sock_impl_set_options", 00:15:23.035 "params": { 00:15:23.035 "impl_name": "ssl", 00:15:23.035 "recv_buf_size": 4096, 00:15:23.035 "send_buf_size": 4096, 00:15:23.035 "enable_recv_pipe": true, 00:15:23.035 "enable_quickack": false, 00:15:23.035 "enable_placement_id": 0, 00:15:23.035 "enable_zerocopy_send_server": true, 00:15:23.035 "enable_zerocopy_send_client": false, 00:15:23.035 "zerocopy_threshold": 0, 00:15:23.035 "tls_version": 0, 00:15:23.035 "enable_ktls": false 00:15:23.035 } 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "method": "sock_impl_set_options", 00:15:23.035 "params": { 00:15:23.035 "impl_name": "posix", 00:15:23.035 "recv_buf_size": 2097152, 00:15:23.035 "send_buf_size": 2097152, 00:15:23.035 "enable_recv_pipe": true, 00:15:23.035 "enable_quickack": false, 00:15:23.035 "enable_placement_id": 0, 00:15:23.035 "enable_zerocopy_send_server": true, 00:15:23.035 "enable_zerocopy_send_client": false, 00:15:23.035 "zerocopy_threshold": 0, 00:15:23.035 "tls_version": 0, 00:15:23.035 "enable_ktls": false 00:15:23.035 } 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "method": "sock_impl_set_options", 00:15:23.035 "params": { 00:15:23.035 "impl_name": "uring", 00:15:23.035 "recv_buf_size": 2097152, 00:15:23.035 "send_buf_size": 2097152, 00:15:23.035 "enable_recv_pipe": true, 00:15:23.035 "enable_quickack": false, 00:15:23.035 "enable_placement_id": 0, 00:15:23.035 "enable_zerocopy_send_server": false, 00:15:23.035 "enable_zerocopy_send_client": false, 00:15:23.035 "zerocopy_threshold": 0, 00:15:23.035 "tls_version": 0, 00:15:23.035 "enable_ktls": false 00:15:23.035 } 00:15:23.035 } 00:15:23.035 ] 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "subsystem": "vmd", 00:15:23.035 "config": [] 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "subsystem": "accel", 00:15:23.035 "config": [ 00:15:23.035 { 00:15:23.035 "method": "accel_set_options", 00:15:23.035 "params": { 00:15:23.035 "small_cache_size": 128, 00:15:23.035 "large_cache_size": 16, 00:15:23.035 "task_count": 2048, 00:15:23.035 "sequence_count": 2048, 00:15:23.035 "buf_count": 2048 00:15:23.035 } 00:15:23.035 } 00:15:23.035 ] 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "subsystem": "bdev", 00:15:23.035 "config": [ 00:15:23.035 { 00:15:23.035 "method": "bdev_set_options", 00:15:23.035 "params": { 00:15:23.035 "bdev_io_pool_size": 65535, 00:15:23.035 "bdev_io_cache_size": 256, 00:15:23.035 "bdev_auto_examine": true, 
00:15:23.035 "iobuf_small_cache_size": 128, 00:15:23.035 "iobuf_large_cache_size": 16 00:15:23.035 } 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "method": "bdev_raid_set_options", 00:15:23.035 "params": { 00:15:23.035 "process_window_size_kb": 1024, 00:15:23.035 "process_max_bandwidth_mb_sec": 0 00:15:23.035 } 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "method": "bdev_iscsi_set_options", 00:15:23.035 "params": { 00:15:23.035 "timeout_sec": 30 00:15:23.035 } 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "method": "bdev_nvme_set_options", 00:15:23.035 "params": { 00:15:23.035 "action_on_timeout": "none", 00:15:23.035 "timeout_us": 0, 00:15:23.035 "timeout_admin_us": 0, 00:15:23.035 "keep_alive_timeout_ms": 10000, 00:15:23.035 "arbitration_burst": 0, 00:15:23.035 "low_priority_weight": 0, 00:15:23.035 "medium_priority_weight": 0, 00:15:23.035 "high_priority_weight": 0, 00:15:23.035 "nvme_adminq_poll_period_us": 10000, 00:15:23.035 "nvme_ioq_poll_period_us": 0, 00:15:23.035 "io_queue_requests": 512, 00:15:23.035 "delay_cmd_submit": true, 00:15:23.035 "transport_retry_count": 4, 00:15:23.035 "bdev_retry_count": 3, 00:15:23.035 "transport_ack_timeout": 0, 00:15:23.035 "ctrlr_loss_timeout_sec": 0, 00:15:23.035 "reconnect_delay_sec": 0, 00:15:23.035 "fast_io_fail_timeout_sec": 0, 00:15:23.035 "disable_auto_failback": false, 00:15:23.035 "generate_uuids": false, 00:15:23.035 "transport_tos": 0, 00:15:23.035 "nvme_error_stat": false, 00:15:23.035 "rdma_srq_size": 0, 00:15:23.035 "io_path_stat": false, 00:15:23.035 "allow_accel_sequence": false, 00:15:23.035 "rdma_max_cq_size": 0, 00:15:23.035 "rdma_cm_event_timeout_ms": 0, 00:15:23.035 "dhchap_digests": [ 00:15:23.035 "sha256", 00:15:23.035 "sha384", 00:15:23.035 "sha512" 00:15:23.035 ], 00:15:23.035 "dhchap_dhgroups": [ 00:15:23.035 "null", 00:15:23.035 "ffdhe2048", 00:15:23.035 "ffdhe3072", 00:15:23.035 "ffdhe4096", 00:15:23.035 "ffdhe6144", 00:15:23.035 "ffdhe8192" 00:15:23.035 ] 00:15:23.035 } 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "method": "bdev_nvme_attach_controller", 00:15:23.035 "params": { 00:15:23.035 "name": "nvme0", 00:15:23.035 "trtype": "TCP", 00:15:23.035 "adrfam": "IPv4", 00:15:23.035 "traddr": "10.0.0.2", 00:15:23.035 "trsvcid": "4420", 00:15:23.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.035 "prchk_reftag": false, 00:15:23.035 "prchk_guard": false, 00:15:23.035 "ctrlr_loss_timeout_sec": 0, 00:15:23.035 "reconnect_delay_sec": 0, 00:15:23.035 "fast_io_fail_timeout_sec": 0, 00:15:23.035 "psk": "key0", 00:15:23.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.035 "hdgst": false, 00:15:23.035 "ddgst": false 00:15:23.035 } 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "method": "bdev_nvme_set_hotplug", 00:15:23.035 "params": { 00:15:23.035 "period_us": 100000, 00:15:23.035 "enable": false 00:15:23.035 } 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "method": "bdev_enable_histogram", 00:15:23.035 "params": { 00:15:23.035 "name": "nvme0n1", 00:15:23.035 "enable": true 00:15:23.035 } 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "method": "bdev_wait_for_examine" 00:15:23.035 } 00:15:23.035 ] 00:15:23.035 }, 00:15:23.035 { 00:15:23.035 "subsystem": "nbd", 00:15:23.035 "config": [] 00:15:23.035 } 00:15:23.035 ] 00:15:23.035 }' 00:15:23.035 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 87418 00:15:23.035 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87418 ']' 00:15:23.035 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 87418 00:15:23.035 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:23.035 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:23.035 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87418 00:15:23.035 killing process with pid 87418 00:15:23.035 Received shutdown signal, test time was about 1.000000 seconds 00:15:23.035 00:15:23.035 Latency(us) 00:15:23.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.035 =================================================================================================================== 00:15:23.035 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.035 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:23.035 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87418' 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87418 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87418 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 87385 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87385 ']' 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 87385 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87385 00:15:23.036 killing process with pid 87385 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87385' 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87385 00:15:23.036 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87385 00:15:23.295 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:15:23.295 "subsystems": [ 00:15:23.295 { 00:15:23.295 "subsystem": "keyring", 00:15:23.295 "config": [ 00:15:23.295 { 00:15:23.295 "method": "keyring_file_add_key", 00:15:23.295 "params": { 00:15:23.295 "name": "key0", 00:15:23.295 "path": "/tmp/tmp.GOCmR5ZfC5" 00:15:23.295 } 00:15:23.295 } 00:15:23.295 ] 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "subsystem": "iobuf", 00:15:23.295 "config": [ 00:15:23.295 { 00:15:23.295 "method": "iobuf_set_options", 00:15:23.295 "params": { 00:15:23.295 "small_pool_count": 8192, 00:15:23.295 "large_pool_count": 1024, 00:15:23.295 "small_bufsize": 8192, 00:15:23.295 "large_bufsize": 135168 00:15:23.295 } 00:15:23.295 } 00:15:23.295 ] 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "subsystem": "sock", 00:15:23.295 "config": [ 
00:15:23.295 { 00:15:23.295 "method": "sock_set_default_impl", 00:15:23.295 "params": { 00:15:23.295 "impl_name": "uring" 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "sock_impl_set_options", 00:15:23.295 "params": { 00:15:23.295 "impl_name": "ssl", 00:15:23.295 "recv_buf_size": 4096, 00:15:23.295 "send_buf_size": 4096, 00:15:23.295 "enable_recv_pipe": true, 00:15:23.295 "enable_quickack": false, 00:15:23.295 "enable_placement_id": 0, 00:15:23.295 "enable_zerocopy_send_server": true, 00:15:23.295 "enable_zerocopy_send_client": false, 00:15:23.295 "zerocopy_threshold": 0, 00:15:23.295 "tls_version": 0, 00:15:23.295 "enable_ktls": false 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "sock_impl_set_options", 00:15:23.295 "params": { 00:15:23.295 "impl_name": "posix", 00:15:23.295 "recv_buf_size": 2097152, 00:15:23.295 "send_buf_size": 2097152, 00:15:23.295 "enable_recv_pipe": true, 00:15:23.295 "enable_quickack": false, 00:15:23.295 "enable_placement_id": 0, 00:15:23.295 "enable_zerocopy_send_server": true, 00:15:23.295 "enable_zerocopy_send_client": false, 00:15:23.295 "zerocopy_threshold": 0, 00:15:23.295 "tls_version": 0, 00:15:23.295 "enable_ktls": false 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "sock_impl_set_options", 00:15:23.295 "params": { 00:15:23.295 "impl_name": "uring", 00:15:23.295 "recv_buf_size": 2097152, 00:15:23.295 "send_buf_size": 2097152, 00:15:23.295 "enable_recv_pipe": true, 00:15:23.295 "enable_quickack": false, 00:15:23.295 "enable_placement_id": 0, 00:15:23.295 "enable_zerocopy_send_server": false, 00:15:23.295 "enable_zerocopy_send_client": false, 00:15:23.295 "zerocopy_threshold": 0, 00:15:23.295 "tls_version": 0, 00:15:23.295 "enable_ktls": false 00:15:23.295 } 00:15:23.295 } 00:15:23.295 ] 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "subsystem": "vmd", 00:15:23.295 "config": [] 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "subsystem": "accel", 00:15:23.295 "config": [ 00:15:23.295 { 00:15:23.295 "method": "accel_set_options", 00:15:23.295 "params": { 00:15:23.295 "small_cache_size": 128, 00:15:23.295 "large_cache_size": 16, 00:15:23.295 "task_count": 2048, 00:15:23.295 "sequence_count": 2048, 00:15:23.295 "buf_count": 2048 00:15:23.295 } 00:15:23.295 } 00:15:23.295 ] 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "subsystem": "bdev", 00:15:23.295 "config": [ 00:15:23.295 { 00:15:23.295 "method": "bdev_set_options", 00:15:23.295 "params": { 00:15:23.295 "bdev_io_pool_size": 65535, 00:15:23.295 "bdev_io_cache_size": 256, 00:15:23.295 "bdev_auto_examine": true, 00:15:23.295 "iobuf_small_cache_size": 128, 00:15:23.295 "iobuf_large_cache_size": 16 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "bdev_raid_set_options", 00:15:23.295 "params": { 00:15:23.295 "process_window_size_kb": 1024, 00:15:23.295 "process_max_bandwidth_mb_sec": 0 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "bdev_iscsi_set_options", 00:15:23.295 "params": { 00:15:23.295 "timeout_sec": 30 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "bdev_nvme_set_options", 00:15:23.295 "params": { 00:15:23.295 "action_on_timeout": "none", 00:15:23.295 "timeout_us": 0, 00:15:23.295 "timeout_admin_us": 0, 00:15:23.295 "keep_alive_timeout_ms": 10000, 00:15:23.295 "arbitration_burst": 0, 00:15:23.295 "low_priority_weight": 0, 00:15:23.295 "medium_priority_weight": 0, 00:15:23.295 "high_priority_weight": 0, 00:15:23.295 "nvme_adminq_poll_period_us": 10000, 00:15:23.295 
"nvme_ioq_poll_period_us": 0, 00:15:23.295 "io_queue_requests": 0, 00:15:23.295 "delay_cmd_submit": true, 00:15:23.295 "transport_retry_count": 4, 00:15:23.295 "bdev_retry_count": 3, 00:15:23.295 "transport_ack_timeout": 0, 00:15:23.295 "ctrlr_loss_timeout_sec": 0, 00:15:23.295 "reconnect_delay_sec": 0, 00:15:23.295 "fast_io_fail_timeout_sec": 0, 00:15:23.295 "disable_auto_failback": false, 00:15:23.295 "generate_uuids": false, 00:15:23.295 "transport_tos": 0, 00:15:23.295 "nvme_error_stat": false, 00:15:23.295 "rdma_srq_size": 0, 00:15:23.295 "io_path_stat": false, 00:15:23.295 "allow_accel_sequence": false, 00:15:23.295 "rdma_max_cq_size": 0, 00:15:23.295 "rdma_cm_event_timeout_ms": 0, 00:15:23.295 "dhchap_digests": [ 00:15:23.295 "sha256", 00:15:23.295 "sha384", 00:15:23.295 "sha512" 00:15:23.295 ], 00:15:23.295 "dhchap_dhgroups": [ 00:15:23.295 "null", 00:15:23.295 "ffdhe2048", 00:15:23.295 "ffdhe3072", 00:15:23.295 "ffdhe4096", 00:15:23.295 "ffdhe6144", 00:15:23.295 "ffdhe8192" 00:15:23.295 ] 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "bdev_nvme_set_hotplug", 00:15:23.295 "params": { 00:15:23.295 "period_us": 100000, 00:15:23.295 "enable": false 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "bdev_malloc_create", 00:15:23.295 "params": { 00:15:23.295 "name": "malloc0", 00:15:23.295 "num_blocks": 8192, 00:15:23.295 "block_size": 4096, 00:15:23.295 "physical_block_size": 4096, 00:15:23.295 "uuid": "8ffddfec-b496-4ae3-851b-793baeaf6faa", 00:15:23.295 "optimal_io_boundary": 0, 00:15:23.295 "md_size": 0, 00:15:23.295 "dif_type": 0, 00:15:23.295 "dif_is_head_of_md": false, 00:15:23.295 "dif_pi_format": 0 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "bdev_wait_for_examine" 00:15:23.295 } 00:15:23.295 ] 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "subsystem": "nbd", 00:15:23.295 "config": [] 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "subsystem": "scheduler", 00:15:23.295 "config": [ 00:15:23.295 { 00:15:23.295 "method": "framework_set_scheduler", 00:15:23.295 "params": { 00:15:23.295 "name": "static" 00:15:23.295 } 00:15:23.295 } 00:15:23.295 ] 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "subsystem": "nvmf", 00:15:23.295 "config": [ 00:15:23.295 { 00:15:23.295 "method": "nvmf_set_config", 00:15:23.295 "params": { 00:15:23.295 "discovery_filter": "match_any", 00:15:23.295 "admin_cmd_passthru": { 00:15:23.295 "identify_ctrlr": false 00:15:23.295 } 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "nvmf_set_max_subsystems", 00:15:23.295 "params": { 00:15:23.295 "max_subsystems": 1024 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "nvmf_set_crdt", 00:15:23.295 "params": { 00:15:23.295 "crdt1": 0, 00:15:23.295 "crdt2": 0, 00:15:23.295 "crdt3": 0 00:15:23.295 } 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "method": "nvmf_create_transport", 00:15:23.295 "params 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:15:23.295 ": { 00:15:23.295 "trtype": "TCP", 00:15:23.295 "max_queue_depth": 128, 00:15:23.295 "max_io_qpairs_per_ctrlr": 127, 00:15:23.295 "in_capsule_data_size": 4096, 00:15:23.295 "max_io_size": 131072, 00:15:23.295 "io_unit_size": 131072, 00:15:23.295 "max_aq_depth": 128, 00:15:23.295 "num_shared_buffers": 511, 00:15:23.295 "buf_cache_size": 4294967295, 00:15:23.295 "dif_insert_or_strip": false, 00:15:23.295 "zcopy": false, 00:15:23.295 "c2h_success": false, 00:15:23.295 "sock_priority": 0, 00:15:23.295 "abort_timeout_sec": 1, 
00:15:23.295 "ack_timeout": 0, 00:15:23.295 "data_wr_pool_size": 0 00:15:23.296 } 00:15:23.296 }, 00:15:23.296 { 00:15:23.296 "method": "nvmf_create_subsystem", 00:15:23.296 "params": { 00:15:23.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.296 "allow_any_host": false, 00:15:23.296 "serial_number": "00000000000000000000", 00:15:23.296 "model_number": "SPDK bdev Controller", 00:15:23.296 "max_namespaces": 32, 00:15:23.296 "min_cntlid": 1, 00:15:23.296 "max_cntlid": 65519, 00:15:23.296 "ana_reporting": false 00:15:23.296 } 00:15:23.296 }, 00:15:23.296 { 00:15:23.296 "method": "nvmf_subsystem_add_host", 00:15:23.296 "params": { 00:15:23.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.296 "host": "nqn.2016-06.io.spdk:host1", 00:15:23.296 "psk": "key0" 00:15:23.296 } 00:15:23.296 }, 00:15:23.296 { 00:15:23.296 "method": "nvmf_subsystem_add_ns", 00:15:23.296 "params": { 00:15:23.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.296 "namespace": { 00:15:23.296 "nsid": 1, 00:15:23.296 "bdev_name": "malloc0", 00:15:23.296 "nguid": "8FFDDFECB4964AE3851B793BAEAF6FAA", 00:15:23.296 "uuid": "8ffddfec-b496-4ae3-851b-793baeaf6faa", 00:15:23.296 "no_auto_visible": false 00:15:23.296 } 00:15:23.296 } 00:15:23.296 }, 00:15:23.296 { 00:15:23.296 "method": "nvmf_subsystem_add_listener", 00:15:23.296 "params": { 00:15:23.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.296 "listen_address": { 00:15:23.296 "trtype": "TCP", 00:15:23.296 "adrfam": "IPv4", 00:15:23.296 "traddr": "10.0.0.2", 00:15:23.296 "trsvcid": "4420" 00:15:23.296 }, 00:15:23.296 "secure_channel": false, 00:15:23.296 "sock_impl": "ssl" 00:15:23.296 } 00:15:23.296 } 00:15:23.296 ] 00:15:23.296 } 00:15:23.296 ] 00:15:23.296 }' 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=87479 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 87479 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87479 ']' 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.296 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.296 [2024-07-23 04:11:16.578343] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
00:15:23.296 [2024-07-23 04:11:16.578798] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.554 [2024-07-23 04:11:16.701663] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:23.554 [2024-07-23 04:11:16.716193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.554 [2024-07-23 04:11:16.769311] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.554 [2024-07-23 04:11:16.769375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.554 [2024-07-23 04:11:16.769401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.554 [2024-07-23 04:11:16.769408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.554 [2024-07-23 04:11:16.769414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.554 [2024-07-23 04:11:16.769489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.812 [2024-07-23 04:11:16.931870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:23.812 [2024-07-23 04:11:17.000797] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.812 [2024-07-23 04:11:17.032753] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:23.812 [2024-07-23 04:11:17.046108] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=87511 00:15:24.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 87511 /var/tmp/bdevperf.sock 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87511 ']' 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
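The -c /dev/fd/62 and -c /dev/fd/63 arguments in this phase feed the JSON captured earlier with save_config straight back into fresh nvmf_tgt and bdevperf instances instead of reissuing the RPCs one by one. A rough standalone equivalent for the bdevperf side, as a sketch only; the intermediate file name bperf.json is illustrative and does not appear in this run:
# capture the live configuration of the running bdevperf app as JSON
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf.json
# relaunch bdevperf with the same workload options, loading that JSON at startup
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bperf.json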
00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:24.379 04:11:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:15:24.379 "subsystems": [ 00:15:24.379 { 00:15:24.379 "subsystem": "keyring", 00:15:24.379 "config": [ 00:15:24.379 { 00:15:24.379 "method": "keyring_file_add_key", 00:15:24.379 "params": { 00:15:24.379 "name": "key0", 00:15:24.379 "path": "/tmp/tmp.GOCmR5ZfC5" 00:15:24.379 } 00:15:24.379 } 00:15:24.379 ] 00:15:24.379 }, 00:15:24.379 { 00:15:24.379 "subsystem": "iobuf", 00:15:24.379 "config": [ 00:15:24.379 { 00:15:24.379 "method": "iobuf_set_options", 00:15:24.379 "params": { 00:15:24.379 "small_pool_count": 8192, 00:15:24.379 "large_pool_count": 1024, 00:15:24.379 "small_bufsize": 8192, 00:15:24.379 "large_bufsize": 135168 00:15:24.379 } 00:15:24.379 } 00:15:24.379 ] 00:15:24.379 }, 00:15:24.379 { 00:15:24.379 "subsystem": "sock", 00:15:24.379 "config": [ 00:15:24.379 { 00:15:24.379 "method": "sock_set_default_impl", 00:15:24.379 "params": { 00:15:24.379 "impl_name": "uring" 00:15:24.379 } 00:15:24.379 }, 00:15:24.379 { 00:15:24.379 "method": "sock_impl_set_options", 00:15:24.379 "params": { 00:15:24.379 "impl_name": "ssl", 00:15:24.379 "recv_buf_size": 4096, 00:15:24.379 "send_buf_size": 4096, 00:15:24.379 "enable_recv_pipe": true, 00:15:24.379 "enable_quickack": false, 00:15:24.379 "enable_placement_id": 0, 00:15:24.379 "enable_zerocopy_send_server": true, 00:15:24.379 "enable_zerocopy_send_client": false, 00:15:24.379 "zerocopy_threshold": 0, 00:15:24.379 "tls_version": 0, 00:15:24.379 "enable_ktls": false 00:15:24.379 } 00:15:24.379 }, 00:15:24.379 { 00:15:24.379 "method": "sock_impl_set_options", 00:15:24.379 "params": { 00:15:24.379 "impl_name": "posix", 00:15:24.379 "recv_buf_size": 2097152, 00:15:24.379 "send_buf_size": 2097152, 00:15:24.379 "enable_recv_pipe": true, 00:15:24.379 "enable_quickack": false, 00:15:24.379 "enable_placement_id": 0, 00:15:24.379 "enable_zerocopy_send_server": true, 00:15:24.379 "enable_zerocopy_send_client": false, 00:15:24.379 "zerocopy_threshold": 0, 00:15:24.379 "tls_version": 0, 00:15:24.379 "enable_ktls": false 00:15:24.379 } 00:15:24.379 }, 00:15:24.379 { 00:15:24.379 "method": "sock_impl_set_options", 00:15:24.379 "params": { 00:15:24.380 "impl_name": "uring", 00:15:24.380 "recv_buf_size": 2097152, 00:15:24.380 "send_buf_size": 2097152, 00:15:24.380 "enable_recv_pipe": true, 00:15:24.380 "enable_quickack": false, 00:15:24.380 "enable_placement_id": 0, 00:15:24.380 "enable_zerocopy_send_server": false, 00:15:24.380 "enable_zerocopy_send_client": false, 00:15:24.380 "zerocopy_threshold": 0, 00:15:24.380 "tls_version": 0, 00:15:24.380 "enable_ktls": false 00:15:24.380 } 00:15:24.380 } 00:15:24.380 ] 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "subsystem": "vmd", 00:15:24.380 "config": [] 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "subsystem": "accel", 00:15:24.380 "config": [ 00:15:24.380 { 00:15:24.380 "method": "accel_set_options", 00:15:24.380 "params": { 00:15:24.380 "small_cache_size": 128, 00:15:24.380 "large_cache_size": 16, 00:15:24.380 "task_count": 2048, 00:15:24.380 "sequence_count": 2048, 00:15:24.380 "buf_count": 2048 
00:15:24.380 } 00:15:24.380 } 00:15:24.380 ] 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "subsystem": "bdev", 00:15:24.380 "config": [ 00:15:24.380 { 00:15:24.380 "method": "bdev_set_options", 00:15:24.380 "params": { 00:15:24.380 "bdev_io_pool_size": 65535, 00:15:24.380 "bdev_io_cache_size": 256, 00:15:24.380 "bdev_auto_examine": true, 00:15:24.380 "iobuf_small_cache_size": 128, 00:15:24.380 "iobuf_large_cache_size": 16 00:15:24.380 } 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "method": "bdev_raid_set_options", 00:15:24.380 "params": { 00:15:24.380 "process_window_size_kb": 1024, 00:15:24.380 "process_max_bandwidth_mb_sec": 0 00:15:24.380 } 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "method": "bdev_iscsi_set_options", 00:15:24.380 "params": { 00:15:24.380 "timeout_sec": 30 00:15:24.380 } 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "method": "bdev_nvme_set_options", 00:15:24.380 "params": { 00:15:24.380 "action_on_timeout": "none", 00:15:24.380 "timeout_us": 0, 00:15:24.380 "timeout_admin_us": 0, 00:15:24.380 "keep_alive_timeout_ms": 10000, 00:15:24.380 "arbitration_burst": 0, 00:15:24.380 "low_priority_weight": 0, 00:15:24.380 "medium_priority_weight": 0, 00:15:24.380 "high_priority_weight": 0, 00:15:24.380 "nvme_adminq_poll_period_us": 10000, 00:15:24.380 "nvme_ioq_poll_period_us": 0, 00:15:24.380 "io_queue_requests": 512, 00:15:24.380 "delay_cmd_submit": true, 00:15:24.380 "transport_retry_count": 4, 00:15:24.380 "bdev_retry_count": 3, 00:15:24.380 "transport_ack_timeout": 0, 00:15:24.380 "ctrlr_loss_timeout_sec": 0, 00:15:24.380 "reconnect_delay_sec": 0, 00:15:24.380 "fast_io_fail_timeout_sec": 0, 00:15:24.380 "disable_auto_failback": false, 00:15:24.380 "generate_uuids": false, 00:15:24.380 "transport_tos": 0, 00:15:24.380 "nvme_error_stat": false, 00:15:24.380 "rdma_srq_size": 0, 00:15:24.380 "io_path_stat": false, 00:15:24.380 "allow_accel_sequence": false, 00:15:24.380 "rdma_max_cq_size": 0, 00:15:24.380 "rdma_cm_event_timeout_ms": 0, 00:15:24.380 "dhchap_digests": [ 00:15:24.380 "sha256", 00:15:24.380 "sha384", 00:15:24.380 "sha512" 00:15:24.380 ], 00:15:24.380 "dhchap_dhgroups": [ 00:15:24.380 "null", 00:15:24.380 "ffdhe2048", 00:15:24.380 "ffdhe3072", 00:15:24.380 "ffdhe4096", 00:15:24.380 "ffdhe6144", 00:15:24.380 "ffdhe8192" 00:15:24.380 ] 00:15:24.380 } 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "method": "bdev_nvme_attach_controller", 00:15:24.380 "params": { 00:15:24.380 "name": "nvme0", 00:15:24.380 "trtype": "TCP", 00:15:24.380 "adrfam": "IPv4", 00:15:24.380 "traddr": "10.0.0.2", 00:15:24.380 "trsvcid": "4420", 00:15:24.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.380 "prchk_reftag": false, 00:15:24.380 "prchk_guard": false, 00:15:24.380 "ctrlr_loss_timeout_sec": 0, 00:15:24.380 "reconnect_delay_sec": 0, 00:15:24.380 "fast_io_fail_timeout_sec": 0, 00:15:24.380 "psk": "key0", 00:15:24.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.380 "hdgst": false, 00:15:24.380 "ddgst": false 00:15:24.380 } 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "method": "bdev_nvme_set_hotplug", 00:15:24.380 "params": { 00:15:24.380 "period_us": 100000, 00:15:24.380 "enable": false 00:15:24.380 } 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "method": "bdev_enable_histogram", 00:15:24.380 "params": { 00:15:24.380 "name": "nvme0n1", 00:15:24.380 "enable": true 00:15:24.380 } 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "method": "bdev_wait_for_examine" 00:15:24.380 } 00:15:24.380 ] 00:15:24.380 }, 00:15:24.380 { 00:15:24.380 "subsystem": "nbd", 00:15:24.380 "config": [] 00:15:24.380 } 
00:15:24.380 ] 00:15:24.380 }' 00:15:24.380 [2024-07-23 04:11:17.644039] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:15:24.380 [2024-07-23 04:11:17.644134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87511 ] 00:15:24.639 [2024-07-23 04:11:17.766508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:24.639 [2024-07-23 04:11:17.785694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.639 [2024-07-23 04:11:17.864998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.898 [2024-07-23 04:11:17.998624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:24.898 [2024-07-23 04:11:18.039869] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:25.465 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.465 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:25.465 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:25.465 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:15:25.723 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.723 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:25.723 Running I/O for 1 seconds... 
00:15:26.660 00:15:26.660 Latency(us) 00:15:26.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.660 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.660 Verification LBA range: start 0x0 length 0x2000 00:15:26.660 nvme0n1 : 1.02 4916.09 19.20 0.00 0.00 25794.62 9770.82 20375.74 00:15:26.660 =================================================================================================================== 00:15:26.660 Total : 4916.09 19.20 0.00 0.00 25794.62 9770.82 20375.74 00:15:26.660 0 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:26.660 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:26.660 nvmf_trace.0 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 87511 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87511 ']' 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 87511 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87511 00:15:26.919 killing process with pid 87511 00:15:26.919 Received shutdown signal, test time was about 1.000000 seconds 00:15:26.919 00:15:26.919 Latency(us) 00:15:26.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.919 =================================================================================================================== 00:15:26.919 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87511' 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 87511 00:15:26.919 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87511 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.178 rmmod nvme_tcp 00:15:27.178 rmmod nvme_fabrics 00:15:27.178 rmmod nvme_keyring 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 87479 ']' 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 87479 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87479 ']' 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 87479 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87479 00:15:27.178 killing process with pid 87479 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87479' 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87479 00:15:27.178 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87479 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
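Condensed, the teardown running here unloads the NVMe/TCP host modules, stops the nvmf_tgt that served this phase, and removes the test network namespace before the interface flush and key-file cleanup that follow below. A sketch only; the netns deletion is an assumption, since _remove_spdk_ns is not expanded in this trace:
# the -v output above shows nvme_tcp, nvme_fabrics and nvme_keyring all being removed
modprobe -v -r nvme-tcp
# killprocess of the target started for this phase
kill 87479
# assumed effect of _remove_spdk_ns on the nvmf_tgt_ns_spdk namespace used by this job
ip netns delete nvmf_tgt_ns_spdk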
00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aXLA6hudFP /tmp/tmp.OghtzQ7Fsg /tmp/tmp.GOCmR5ZfC5 00:15:27.437 00:15:27.437 real 1m19.579s 00:15:27.437 user 2m3.846s 00:15:27.437 sys 0m27.481s 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.437 ************************************ 00:15:27.437 END TEST nvmf_tls 00:15:27.437 ************************************ 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.437 ************************************ 00:15:27.437 START TEST nvmf_fips 00:15:27.437 ************************************ 00:15:27.437 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:27.697 * Looking for test storage... 00:15:27.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:27.697 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:27.698 04:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:27.698 Error setting digest 00:15:27.698 00721531D77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:27.698 00721531D77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.698 
04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.698 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.698 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:27.698 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:27.698 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:27.698 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:27.698 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:27.698 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:27.698 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.698 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:27.698 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:27.698 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:27.699 Cannot find device "nvmf_tgt_br" 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:27.699 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.958 Cannot find device "nvmf_tgt_br2" 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:27.958 04:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:27.958 Cannot find device "nvmf_tgt_br" 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:27.958 Cannot find device "nvmf_tgt_br2" 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:27.958 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:28.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:15:28.218 00:15:28.218 --- 10.0.0.2 ping statistics --- 00:15:28.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.218 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:28.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:28.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:28.218 00:15:28.218 --- 10.0.0.3 ping statistics --- 00:15:28.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.218 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:28.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:28.218 00:15:28.218 --- 10.0.0.1 ping statistics --- 00:15:28.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.218 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=87771 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 87771 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 87771 ']' 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.218 04:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:28.218 [2024-07-23 04:11:21.408052] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:15:28.218 [2024-07-23 04:11:21.408136] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.218 [2024-07-23 04:11:21.525149] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:15:28.218 [2024-07-23 04:11:21.539744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.477 [2024-07-23 04:11:21.596577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.477 [2024-07-23 04:11:21.596637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.477 [2024-07-23 04:11:21.596664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.477 [2024-07-23 04:11:21.596671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.477 [2024-07-23 04:11:21.596677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.477 [2024-07-23 04:11:21.596704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.477 [2024-07-23 04:11:21.649673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:29.062 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.062 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:29.062 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.062 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.062 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:29.328 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.328 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:29.328 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:29.328 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:29.328 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:29.328 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:29.328 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:29.328 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:29.328 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.328 [2024-07-23 04:11:22.642189] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.328 [2024-07-23 04:11:22.658149] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:29.328 [2024-07-23 04:11:22.658300] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.587 [2024-07-23 04:11:22.688595] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:29.587 malloc0 00:15:29.587 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:29.587 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=87811 00:15:29.587 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:29.587 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 87811 /var/tmp/bdevperf.sock 00:15:29.587 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 87811 ']' 00:15:29.587 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.587 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.587 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.587 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.587 04:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:29.587 [2024-07-23 04:11:22.797666] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:15:29.587 [2024-07-23 04:11:22.798026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87811 ] 00:15:29.587 [2024-07-23 04:11:22.921202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:29.846 [2024-07-23 04:11:22.940022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.846 [2024-07-23 04:11:23.005436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.846 [2024-07-23 04:11:23.062171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:30.412 04:11:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.412 04:11:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:30.412 04:11:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:30.671 [2024-07-23 04:11:23.923641] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:30.671 [2024-07-23 04:11:23.923756] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:30.671 TLSTESTn1 00:15:30.671 04:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.929 Running I/O for 10 seconds... 
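The entries just above are the initiator-side TLS setup for this run: the interchange PSK (NVMeTLSkey-1:01:...) is written to key.txt, restricted to mode 0600, and handed to bdevperf through the --psk argument of bdev_nvme_attach_controller, while the target-side configuration done by setup_nvmf_tgt_conf is collapsed into the single rpc.py invocation at fips.sh@24 and not expanded in the trace. A condensed sketch of the initiator-side steps, with the key value and paths taken from the trace (a summary of what the trace shows, not the script verbatim):

    # write the TLS PSK used by the test and restrict its permissions
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' \
        > /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    # attach an NVMe/TCP controller over TLS from the bdevperf application
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt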
00:15:40.906 00:15:40.906 Latency(us) 00:15:40.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.906 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:40.906 Verification LBA range: start 0x0 length 0x2000 00:15:40.906 TLSTESTn1 : 10.02 4637.23 18.11 0.00 0.00 27552.41 6315.29 20494.89 00:15:40.906 =================================================================================================================== 00:15:40.906 Total : 4637.23 18.11 0.00 0.00 27552.41 6315.29 20494.89 00:15:40.906 0 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:40.906 nvmf_trace.0 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 87811 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 87811 ']' 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 87811 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:40.906 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87811 00:15:41.165 killing process with pid 87811 00:15:41.165 Received shutdown signal, test time was about 10.000000 seconds 00:15:41.165 00:15:41.165 Latency(us) 00:15:41.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.165 =================================================================================================================== 00:15:41.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.165 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:41.165 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:41.165 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87811' 00:15:41.165 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 87811 00:15:41.165 [2024-07-23 04:11:34.262550] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:41.165 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 87811 00:15:41.165 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:41.165 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.165 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.424 rmmod nvme_tcp 00:15:41.424 rmmod nvme_fabrics 00:15:41.424 rmmod nvme_keyring 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 87771 ']' 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 87771 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 87771 ']' 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 87771 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87771 00:15:41.424 killing process with pid 87771 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87771' 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 87771 00:15:41.424 [2024-07-23 04:11:34.605744] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:41.424 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 87771 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:41.684 ************************************ 00:15:41.684 END TEST nvmf_fips 00:15:41.684 ************************************ 00:15:41.684 00:15:41.684 real 0m14.129s 00:15:41.684 user 0m18.794s 00:15:41.684 sys 0m5.969s 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:41.684 ************************************ 00:15:41.684 START TEST nvmf_fuzz 00:15:41.684 ************************************ 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:41.684 * Looking for test storage... 
00:15:41.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.684 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.685 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.685 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:41.944 Cannot find device "nvmf_tgt_br" 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.944 Cannot find device "nvmf_tgt_br2" 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:41.944 Cannot find device "nvmf_tgt_br" 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # true 
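The "Cannot find device" and "Cannot open network namespace" messages in this part of the trace are expected: before building its test topology, nvmf_veth_init tears down any interfaces and namespaces left over from a previous run, and on a clean host those delete/down commands simply fail and the script carries on (hence the "true" entries that follow each failure). In shell terms the pattern is roughly the following (a sketch inferred from the trace, not the script verbatim):

    # best-effort cleanup of a previous topology; failures are ignored
    ip link set nvmf_tgt_br nomaster || true
    ip link set nvmf_tgt_br down || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true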
00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:41.944 Cannot find device "nvmf_tgt_br2" 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:41.944 04:11:35 
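With the old devices gone, nvmf_veth_init rebuilds the test topology: one network namespace (nvmf_tgt_ns_spdk) for the target, three veth pairs, and addresses 10.0.0.1 for the initiator in the root namespace plus 10.0.0.2 and 10.0.0.3 for the target inside the namespace. Condensed from the trace above; the bridge-side peers stay in the root namespace so they can be enslaved to nvmf_br in the next step:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target address
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target address
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    # then nvmf_init_if, the three *_br peers, nvmf_br and the namespaced links are brought up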
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.944 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:42.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:42.203 00:15:42.203 --- 10.0.0.2 ping statistics --- 00:15:42.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.203 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:42.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:42.203 00:15:42.203 --- 10.0.0.3 ping statistics --- 00:15:42.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.203 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:42.203 00:15:42.203 --- 10.0.0.1 ping statistics --- 00:15:42.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.203 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:42.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
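The remaining plumbing puts the three bridge-side peers under nvmf_br, opens TCP port 4420 on the initiator interface, allows forwarding across the bridge, and proves reachability in both directions with single pings before any NVMe-oF traffic is attempted. From this point NVMF_APP is prefixed with the namespace wrapper so the target binary always runs inside nvmf_tgt_ns_spdk, and nvme-tcp is loaded for the initiator side. Roughly:

    for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                # root namespace -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator
    NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk "${NVMF_APP[@]}")
    modprobe nvme-tcp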
00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=88136 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 88136 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 88136 ']' 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.203 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 Malloc0 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.138 
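Here fabrics_fuzz.sh launches nvmf_tgt inside the namespace on a single core (-m 0x1) and waitforlisten blocks until PID 88136 is up and answering on /var/tmp/spdk.sock. The real helper lives in autotest_common.sh and is more elaborate; the loop below is only a hedged sketch of the idea, and using rpc_get_methods as the liveness probe is an assumption, not necessarily what the script calls:

    # Start the target in the namespace, then poll its RPC socket before configuring it.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
        sleep 0.5
    done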
04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.139 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:15:43.139 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:15:43.704 Shutting down the fuzz application 00:15:43.704 04:11:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:44.067 Shutting down the fuzz application 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.067 rmmod nvme_tcp 00:15:44.067 rmmod nvme_fabrics 00:15:44.067 rmmod nvme_keyring 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 88136 ']' 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 88136 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 88136 ']' 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 88136 00:15:44.067 04:11:37 
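The RPCs traced above assemble the fuzz target in four steps: a TCP transport with an 8192-byte IO unit, a 64 MiB malloc bdev with 512-byte blocks, a subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev as a namespace, and a listener on 10.0.0.2:4420. nvme_fuzz is then run twice against the resulting transport ID: a 30-second randomized pass with a fixed seed (-S 123456) and a replay of the canned example.json requests, each ending with the "Shutting down the fuzz application" line recorded above. A sketch of the same sequence, assuming rpc_cmd resolves to scripts/rpc.py against /var/tmp/spdk.sock:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create -b Malloc0 64 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j ./test/app/fuzz/nvme_fuzz/example.json -a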
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88136 00:15:44.067 killing process with pid 88136 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88136' 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 88136 00:15:44.067 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 88136 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:44.326 00:15:44.326 real 0m2.624s 00:15:44.326 user 0m2.699s 00:15:44.326 sys 0m0.675s 00:15:44.326 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:44.326 ************************************ 00:15:44.326 END TEST nvmf_fuzz 00:15:44.326 ************************************ 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.327 ************************************ 00:15:44.327 START TEST nvmf_multiconnection 00:15:44.327 ************************************ 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # 
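Teardown runs in roughly the reverse order of setup: the fuzz subsystem is deleted over RPC, the nvme kernel modules are unloaded (the rmmod lines are modprobe -v -r output), target process 88136 is killed and reaped, the namespace helper and address flush undo the veth topology, and the two fuzz log files are removed; the real/user/sys line is the usual per-test time summary. A condensed sketch, where treating _remove_spdk_ns as "ip netns delete nvmf_tgt_ns_spdk" is an assumption about that helper:

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    ip -4 addr flush nvmf_init_if
    rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt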
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:44.327 * Looking for test storage... 00:15:44.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:44.327 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.586 04:11:37 
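For the multiconnection test, nvmf/common.sh pins three listener ports (4420/4421/4422), generates a fresh host NQN with nvme gen-hostnqn and reuses its UUID as the host ID, while multiconnection.sh itself sizes the malloc bdevs (64 MiB, 512-byte blocks) and sets NVMF_SUBSYS=11. The NVME_HOST flags are presumably consumed later, past the end of this excerpt, when the initiator connects to each subsystem; the invocation below is only a hypothetical illustration of how those variables fit together, and deriving the host ID from the NQN suffix is an assumption that happens to match the values traced above:

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: host ID = UUID portion of the generated NQN
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"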
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:44.586 Cannot find device "nvmf_tgt_br" 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:44.586 Cannot find device "nvmf_tgt_br2" 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:44.586 Cannot find device "nvmf_tgt_br" 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:44.586 Cannot find device "nvmf_tgt_br2" 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:44.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:44.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:44.586 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:44.587 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:44.587 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:44.587 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:44.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:15:44.845 00:15:44.845 --- 10.0.0.2 ping statistics --- 00:15:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.845 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:44.845 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:44.845 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:44.845 00:15:44.845 --- 10.0.0.3 ping statistics --- 00:15:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.845 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:44.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:44.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:44.845 00:15:44.845 --- 10.0.0.1 ping statistics --- 00:15:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.845 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:44.845 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:44.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=88330 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 88330 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 88330 ']' 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.845 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:44.846 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.846 04:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:44.846 [2024-07-23 04:11:38.078742] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:15:44.846 [2024-07-23 04:11:38.078827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.104 [2024-07-23 04:11:38.202262] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:45.104 [2024-07-23 04:11:38.219578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.104 [2024-07-23 04:11:38.277736] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.104 [2024-07-23 04:11:38.278098] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.104 [2024-07-23 04:11:38.278234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.104 [2024-07-23 04:11:38.278285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.104 [2024-07-23 04:11:38.278416] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:45.104 [2024-07-23 04:11:38.278589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.104 [2024-07-23 04:11:38.279163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.104 [2024-07-23 04:11:38.279255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.104 [2024-07-23 04:11:38.279261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.104 [2024-07-23 04:11:38.332844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.039 [2024-07-23 04:11:39.102491] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # 
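Unlike the fuzz test's single-core target, multiconnection.sh starts nvmf_tgt with -m 0xF, so the EAL and app notices above show four reactors coming up on cores 0 through 3, and this run overrides the default socket implementation with uring. Once /var/tmp/spdk.sock answers, the first RPC again creates the TCP transport with an 8192-byte IO unit size. Equivalent manual steps, under the same rpc.py assumption as before:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # ...wait for /var/tmp/spdk.sock as in the fuzz test, then:
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192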
[[ 0 == 0 ]] 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.039 Malloc1 00:15:46.039 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 [2024-07-23 04:11:39.179875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 Malloc2 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 Malloc3 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 Malloc4 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 Malloc5 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 
04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.040 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 Malloc6 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.299 04:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 Malloc7 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 Malloc8 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:46.299 04:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 Malloc9 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.299 Malloc10 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:46.299 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.300 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.558 Malloc11 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
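The Malloc5 through Malloc11 entries around this point are the xtrace of the per-subsystem setup loop in target/multiconnection.sh (script lines @21-@25 in the trace). A minimal reconstruction of that loop, assuming nothing beyond the commands visible here (rpc_cmd is the autotest helper that forwards to the SPDK JSON-RPC client, and NVMF_SUBSYS is 11 per the "seq 1 11" seen further down), is roughly:

  for i in $(seq 1 $NVMF_SUBSYS); do
    # 64 MiB malloc bdev with 512-byte blocks, named Malloc$i
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    # subsystem cnode$i: -a allows any host, -s sets the serial number SPDK$i
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # publish the bdev as the subsystem's namespace
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # expose the subsystem on the shared TCP portal
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

Each pass publishes one 64 MiB malloc bdev as the sole namespace of cnode$i, and all eleven subsystems end up listening on the same 10.0.0.2:4420 portal, which is what the host-side connects below rely on.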
00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:46.558 04:11:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:49.090 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:49.091 04:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:50.993 04:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:50.993 04:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:50.993 04:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:15:50.993 04:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:50.994 04:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.994 04:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:50.994 04:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:50.994 04:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:15:50.994 04:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:15:50.994 04:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:50.994 04:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:50.994 04:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:50.994 04:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:52.896 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:52.896 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:52.896 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:15:52.896 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:52.896 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:52.896 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:52.896 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:52.896 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:15:53.155 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:15:53.155 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:53.155 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.155 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:53.155 04:11:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:55.058 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:55.058 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:55.058 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:15:55.058 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:55.058 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.059 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:55.059 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.059 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:15:55.317 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:15:55.317 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:55.317 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:55.317 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:55.317 04:11:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:57.237 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:57.237 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:57.237 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:15:57.237 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:57.237 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.237 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:57.237 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.237 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:15:57.525 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:15:57.525 04:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:57.525 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.525 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:57.525 04:11:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:59.429 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:59.429 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:59.429 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:15:59.429 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:59.429 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.429 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:59.429 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.429 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:15:59.429 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:15:59.430 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:59.430 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.430 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:59.430 04:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 
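The connect/waitforserial pairs in this part of the log repeat the host-side attach step traced at multiconnection.sh@28-@30: an nvme connect against each cnode, followed by the autotest waitforserial helper, whose polling produces the repeated lsblk/grep entries. Reconstructed only from what the trace shows (hostnqn, hostid and target address are copied verbatim; the retry bound comes from the "(( i++ <= 15 ))" check), the loop is roughly:

  for i in $(seq 1 $NVMF_SUBSYS); do
    # attach the initiator to subsystem cnode$i over NVMe/TCP
    nvme connect \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
      --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
      -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    # block until a device with serial SPDK$i appears (at most 16 passes,
    # sleeping 2 s between checks), mirroring the waitforserial trace above
    waitforserial "SPDK$i"
  done

The roughly two-second gap between consecutive connects in the timestamps is that sleep: the script only moves on to the next cnode once "lsblk -l -o NAME,SERIAL | grep -c SPDK$i" reports the expected device.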
00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:01.962 04:11:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:03.866 04:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:03.866 04:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:03.866 04:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:16:03.866 04:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:03.866 04:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.866 04:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:03.866 04:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:03.866 04:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:16:03.866 04:11:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:03.866 04:11:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:03.866 04:11:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.866 04:11:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:03.866 04:11:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:05.767 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:05.767 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:05.767 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:16:05.767 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:05.767 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.767 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:05.767 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.767 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:16:06.025 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:06.025 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:06.025 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.025 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:06.025 04:11:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:07.926 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:07.926 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:07.926 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:16:07.926 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:07.926 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.926 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:07.926 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:07.926 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:16:08.185 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:08.185 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:08.185 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.185 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:08.185 04:12:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:10.087 04:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:10.087 04:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:10.087 04:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:16:10.345 04:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:10.345 04:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.345 04:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:10.345 04:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:10.345 [global] 00:16:10.345 thread=1 00:16:10.345 invalidate=1 00:16:10.345 rw=read 00:16:10.345 time_based=1 00:16:10.345 runtime=10 00:16:10.345 ioengine=libaio 00:16:10.345 direct=1 00:16:10.345 bs=262144 00:16:10.345 iodepth=64 00:16:10.345 norandommap=1 00:16:10.345 numjobs=1 00:16:10.345 00:16:10.345 [job0] 00:16:10.345 filename=/dev/nvme0n1 00:16:10.345 [job1] 00:16:10.345 filename=/dev/nvme10n1 00:16:10.345 [job2] 00:16:10.345 filename=/dev/nvme1n1 00:16:10.345 [job3] 00:16:10.345 filename=/dev/nvme2n1 00:16:10.345 [job4] 00:16:10.345 filename=/dev/nvme3n1 00:16:10.345 [job5] 00:16:10.345 filename=/dev/nvme4n1 00:16:10.345 [job6] 00:16:10.345 filename=/dev/nvme5n1 00:16:10.345 [job7] 00:16:10.345 filename=/dev/nvme6n1 00:16:10.345 [job8] 00:16:10.345 filename=/dev/nvme7n1 00:16:10.345 [job9] 00:16:10.345 filename=/dev/nvme8n1 00:16:10.345 [job10] 00:16:10.345 filename=/dev/nvme9n1 00:16:10.345 Could not set queue depth (nvme0n1) 00:16:10.345 Could not set queue depth (nvme10n1) 00:16:10.345 Could not set queue depth (nvme1n1) 00:16:10.345 Could not set queue depth (nvme2n1) 00:16:10.345 Could not set queue depth (nvme3n1) 00:16:10.345 Could not set queue depth (nvme4n1) 00:16:10.345 Could not set queue depth (nvme5n1) 00:16:10.345 Could not set queue depth (nvme6n1) 00:16:10.345 Could not set queue depth (nvme7n1) 00:16:10.345 Could not set queue depth (nvme8n1) 00:16:10.345 Could not set queue depth (nvme9n1) 00:16:10.604 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.604 fio-3.35 00:16:10.604 Starting 11 threads 00:16:22.809 00:16:22.809 job0: (groupid=0, jobs=1): err= 0: pid=88785: Tue Jul 23 04:12:14 2024 00:16:22.809 read: IOPS=1054, BW=264MiB/s (276MB/s)(2639MiB/10014msec) 00:16:22.809 slat (usec): min=15, max=29613, avg=944.21, stdev=2344.32 00:16:22.809 clat (msec): min=11, max=114, avg=59.69, stdev=12.81 00:16:22.809 lat (msec): min=14, max=121, avg=60.63, stdev=12.95 00:16:22.809 clat percentiles (msec): 00:16:22.809 | 1.00th=[ 31], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 55], 00:16:22.809 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:16:22.809 | 70.00th=[ 60], 80.00th=[ 61], 
90.00th=[ 80], 95.00th=[ 92], 00:16:22.810 | 99.00th=[ 104], 99.50th=[ 108], 99.90th=[ 112], 99.95th=[ 113], 00:16:22.810 | 99.99th=[ 114] 00:16:22.810 bw ( KiB/s): min=165376, max=349184, per=15.69%, avg=268618.10, stdev=45481.59, samples=20 00:16:22.810 iops : min= 646, max= 1364, avg=1049.25, stdev=177.65, samples=20 00:16:22.810 lat (msec) : 20=0.08%, 50=6.82%, 100=91.47%, 250=1.63% 00:16:22.810 cpu : usr=0.34%, sys=2.51%, ctx=2220, majf=0, minf=4097 00:16:22.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:22.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.810 issued rwts: total=10557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.810 job1: (groupid=0, jobs=1): err= 0: pid=88786: Tue Jul 23 04:12:14 2024 00:16:22.810 read: IOPS=387, BW=96.9MiB/s (102MB/s)(984MiB/10149msec) 00:16:22.810 slat (usec): min=21, max=86420, avg=2525.16, stdev=7377.59 00:16:22.810 clat (msec): min=5, max=360, avg=162.36, stdev=70.22 00:16:22.810 lat (msec): min=5, max=368, avg=164.88, stdev=71.50 00:16:22.810 clat percentiles (msec): 00:16:22.810 | 1.00th=[ 21], 5.00th=[ 54], 10.00th=[ 62], 20.00th=[ 69], 00:16:22.810 | 30.00th=[ 92], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 207], 00:16:22.810 | 70.00th=[ 209], 80.00th=[ 213], 90.00th=[ 220], 95.00th=[ 226], 00:16:22.810 | 99.00th=[ 275], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 359], 00:16:22.810 | 99.99th=[ 359] 00:16:22.810 bw ( KiB/s): min=68608, max=258560, per=5.79%, avg=99079.00, stdev=55319.24, samples=20 00:16:22.810 iops : min= 268, max= 1010, avg=386.90, stdev=216.05, samples=20 00:16:22.810 lat (msec) : 10=0.23%, 20=0.69%, 50=2.67%, 100=27.81%, 250=66.88% 00:16:22.810 lat (msec) : 500=1.73% 00:16:22.810 cpu : usr=0.20%, sys=1.66%, ctx=882, majf=0, minf=4097 00:16:22.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:22.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.810 issued rwts: total=3934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.810 job2: (groupid=0, jobs=1): err= 0: pid=88787: Tue Jul 23 04:12:14 2024 00:16:22.810 read: IOPS=341, BW=85.3MiB/s (89.5MB/s)(866MiB/10152msec) 00:16:22.810 slat (usec): min=15, max=68085, avg=2871.05, stdev=7571.19 00:16:22.810 clat (msec): min=40, max=350, avg=184.40, stdev=48.08 00:16:22.810 lat (msec): min=40, max=377, avg=187.27, stdev=49.07 00:16:22.810 clat percentiles (msec): 00:16:22.810 | 1.00th=[ 75], 5.00th=[ 93], 10.00th=[ 107], 20.00th=[ 122], 00:16:22.810 | 30.00th=[ 197], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:16:22.810 | 70.00th=[ 211], 80.00th=[ 213], 90.00th=[ 220], 95.00th=[ 228], 00:16:22.810 | 99.00th=[ 271], 99.50th=[ 317], 99.90th=[ 351], 99.95th=[ 351], 00:16:22.810 | 99.99th=[ 351] 00:16:22.810 bw ( KiB/s): min=71680, max=160256, per=5.08%, avg=87052.05, stdev=25104.87, samples=20 00:16:22.810 iops : min= 280, max= 626, avg=339.90, stdev=98.01, samples=20 00:16:22.810 lat (msec) : 50=0.75%, 100=6.75%, 250=90.68%, 500=1.82% 00:16:22.810 cpu : usr=0.10%, sys=1.03%, ctx=902, majf=0, minf=4097 00:16:22.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:22.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.810 issued rwts: total=3465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.810 job3: (groupid=0, jobs=1): err= 0: pid=88788: Tue Jul 23 04:12:14 2024 00:16:22.810 read: IOPS=349, BW=87.4MiB/s (91.7MB/s)(888MiB/10153msec) 00:16:22.810 slat (usec): min=22, max=129292, avg=2822.57, stdev=8252.23 00:16:22.810 clat (msec): min=19, max=353, avg=179.98, stdev=52.56 00:16:22.810 lat (msec): min=20, max=353, avg=182.80, stdev=53.68 00:16:22.810 clat percentiles (msec): 00:16:22.810 | 1.00th=[ 43], 5.00th=[ 82], 10.00th=[ 96], 20.00th=[ 120], 00:16:22.810 | 30.00th=[ 197], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:16:22.810 | 70.00th=[ 211], 80.00th=[ 213], 90.00th=[ 220], 95.00th=[ 224], 00:16:22.810 | 99.00th=[ 257], 99.50th=[ 292], 99.90th=[ 355], 99.95th=[ 355], 00:16:22.810 | 99.99th=[ 355] 00:16:22.810 bw ( KiB/s): min=70003, max=167936, per=5.21%, avg=89226.35, stdev=27456.79, samples=20 00:16:22.810 iops : min= 273, max= 656, avg=348.45, stdev=107.29, samples=20 00:16:22.810 lat (msec) : 20=0.03%, 50=1.44%, 100=9.89%, 250=87.27%, 500=1.38% 00:16:22.810 cpu : usr=0.22%, sys=1.63%, ctx=850, majf=0, minf=4097 00:16:22.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:22.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.810 issued rwts: total=3550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.810 job4: (groupid=0, jobs=1): err= 0: pid=88789: Tue Jul 23 04:12:14 2024 00:16:22.810 read: IOPS=1090, BW=273MiB/s (286MB/s)(2732MiB/10016msec) 00:16:22.810 slat (usec): min=20, max=46804, avg=890.02, stdev=2191.81 00:16:22.810 clat (msec): min=2, max=137, avg=57.67, stdev= 9.98 00:16:22.810 lat (msec): min=2, max=150, avg=58.56, stdev=10.01 00:16:22.810 clat percentiles (msec): 00:16:22.810 | 1.00th=[ 31], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 55], 00:16:22.810 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:16:22.810 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 64], 95.00th=[ 70], 00:16:22.810 | 99.00th=[ 95], 99.50th=[ 108], 99.90th=[ 134], 99.95th=[ 138], 00:16:22.810 | 99.99th=[ 138] 00:16:22.810 bw ( KiB/s): min=221114, max=300544, per=16.23%, avg=278004.40, stdev=16456.33, samples=20 00:16:22.810 iops : min= 863, max= 1174, avg=1085.80, stdev=64.37, samples=20 00:16:22.810 lat (msec) : 4=0.06%, 10=0.21%, 20=0.22%, 50=6.46%, 100=92.36% 00:16:22.810 lat (msec) : 250=0.69% 00:16:22.810 cpu : usr=0.55%, sys=4.09%, ctx=2147, majf=0, minf=4097 00:16:22.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:22.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.810 issued rwts: total=10926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.810 job5: (groupid=0, jobs=1): err= 0: pid=88790: Tue Jul 23 04:12:14 2024 00:16:22.810 read: IOPS=367, BW=92.0MiB/s (96.4MB/s)(934MiB/10159msec) 00:16:22.810 slat (usec): min=23, max=63693, avg=2684.35, stdev=6950.25 00:16:22.810 clat (msec): min=20, max=374, avg=171.02, stdev=61.33 00:16:22.810 lat (msec): min=20, 
max=374, avg=173.70, stdev=62.43 00:16:22.810 clat percentiles (msec): 00:16:22.810 | 1.00th=[ 57], 5.00th=[ 66], 10.00th=[ 72], 20.00th=[ 86], 00:16:22.810 | 30.00th=[ 136], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 207], 00:16:22.810 | 70.00th=[ 209], 80.00th=[ 213], 90.00th=[ 220], 95.00th=[ 226], 00:16:22.810 | 99.00th=[ 257], 99.50th=[ 313], 99.90th=[ 347], 99.95th=[ 376], 00:16:22.810 | 99.99th=[ 376] 00:16:22.810 bw ( KiB/s): min=72704, max=224319, per=5.49%, avg=93972.25, stdev=42988.01, samples=20 00:16:22.810 iops : min= 284, max= 876, avg=367.00, stdev=167.84, samples=20 00:16:22.810 lat (msec) : 50=0.03%, 100=24.48%, 250=74.28%, 500=1.20% 00:16:22.810 cpu : usr=0.25%, sys=1.77%, ctx=843, majf=0, minf=4097 00:16:22.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:22.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.810 issued rwts: total=3737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.810 job6: (groupid=0, jobs=1): err= 0: pid=88791: Tue Jul 23 04:12:14 2024 00:16:22.810 read: IOPS=344, BW=86.0MiB/s (90.2MB/s)(874MiB/10161msec) 00:16:22.810 slat (usec): min=15, max=99254, avg=2823.14, stdev=7650.90 00:16:22.810 clat (msec): min=27, max=356, avg=182.81, stdev=50.01 00:16:22.810 lat (msec): min=28, max=356, avg=185.63, stdev=51.10 00:16:22.810 clat percentiles (msec): 00:16:22.810 | 1.00th=[ 67], 5.00th=[ 85], 10.00th=[ 97], 20.00th=[ 123], 00:16:22.810 | 30.00th=[ 197], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 209], 00:16:22.810 | 70.00th=[ 211], 80.00th=[ 215], 90.00th=[ 222], 95.00th=[ 226], 00:16:22.810 | 99.00th=[ 264], 99.50th=[ 305], 99.90th=[ 355], 99.95th=[ 355], 00:16:22.810 | 99.99th=[ 355] 00:16:22.810 bw ( KiB/s): min=69493, max=170496, per=5.13%, avg=87815.95, stdev=27468.92, samples=20 00:16:22.810 iops : min= 271, max= 666, avg=342.90, stdev=107.14, samples=20 00:16:22.810 lat (msec) : 50=0.43%, 100=10.44%, 250=87.81%, 500=1.32% 00:16:22.810 cpu : usr=0.15%, sys=0.97%, ctx=966, majf=0, minf=4097 00:16:22.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:22.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.810 issued rwts: total=3496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.810 job7: (groupid=0, jobs=1): err= 0: pid=88792: Tue Jul 23 04:12:14 2024 00:16:22.810 read: IOPS=361, BW=90.4MiB/s (94.8MB/s)(919MiB/10163msec) 00:16:22.810 slat (usec): min=21, max=133376, avg=2687.29, stdev=9306.52 00:16:22.810 clat (msec): min=20, max=378, avg=173.89, stdev=59.70 00:16:22.810 lat (msec): min=21, max=378, avg=176.58, stdev=61.13 00:16:22.810 clat percentiles (msec): 00:16:22.810 | 1.00th=[ 46], 5.00th=[ 80], 10.00th=[ 87], 20.00th=[ 93], 00:16:22.810 | 30.00th=[ 121], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 209], 00:16:22.810 | 70.00th=[ 211], 80.00th=[ 215], 90.00th=[ 220], 95.00th=[ 226], 00:16:22.810 | 99.00th=[ 275], 99.50th=[ 305], 99.90th=[ 372], 99.95th=[ 372], 00:16:22.810 | 99.99th=[ 380] 00:16:22.810 bw ( KiB/s): min=64000, max=177820, per=5.40%, avg=92442.65, stdev=35772.21, samples=20 00:16:22.810 iops : min= 250, max= 694, avg=361.00, stdev=139.60, samples=20 00:16:22.810 lat (msec) : 50=1.61%, 100=24.54%, 
250=72.50%, 500=1.36% 00:16:22.810 cpu : usr=0.26%, sys=1.52%, ctx=856, majf=0, minf=4097 00:16:22.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:22.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.811 issued rwts: total=3676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.811 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.811 job8: (groupid=0, jobs=1): err= 0: pid=88793: Tue Jul 23 04:12:14 2024 00:16:22.811 read: IOPS=1725, BW=431MiB/s (452MB/s)(4318MiB/10011msec) 00:16:22.811 slat (usec): min=14, max=35385, avg=572.79, stdev=1444.32 00:16:22.811 clat (msec): min=6, max=114, avg=36.46, stdev=15.12 00:16:22.811 lat (msec): min=6, max=114, avg=37.03, stdev=15.34 00:16:22.811 clat percentiles (msec): 00:16:22.811 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:16:22.811 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:16:22.811 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 86], 00:16:22.811 | 99.00th=[ 103], 99.50th=[ 105], 99.90th=[ 109], 99.95th=[ 110], 00:16:22.811 | 99.99th=[ 115] 00:16:22.811 bw ( KiB/s): min=159551, max=509440, per=25.72%, avg=440515.90, stdev=123256.44, samples=20 00:16:22.811 iops : min= 623, max= 1990, avg=1720.75, stdev=481.50, samples=20 00:16:22.811 lat (msec) : 10=0.09%, 20=0.32%, 50=91.99%, 100=6.32%, 250=1.29% 00:16:22.811 cpu : usr=0.44%, sys=3.94%, ctx=3461, majf=0, minf=4097 00:16:22.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:22.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.811 issued rwts: total=17272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.811 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.811 job9: (groupid=0, jobs=1): err= 0: pid=88794: Tue Jul 23 04:12:14 2024 00:16:22.811 read: IOPS=361, BW=90.4MiB/s (94.8MB/s)(918MiB/10148msec) 00:16:22.811 slat (usec): min=21, max=111313, avg=2699.76, stdev=7458.24 00:16:22.811 clat (msec): min=4, max=371, avg=173.99, stdev=62.21 00:16:22.811 lat (msec): min=4, max=371, avg=176.69, stdev=63.42 00:16:22.811 clat percentiles (msec): 00:16:22.811 | 1.00th=[ 11], 5.00th=[ 25], 10.00th=[ 86], 20.00th=[ 116], 00:16:22.811 | 30.00th=[ 176], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 207], 00:16:22.811 | 70.00th=[ 211], 80.00th=[ 213], 90.00th=[ 220], 95.00th=[ 226], 00:16:22.811 | 99.00th=[ 264], 99.50th=[ 305], 99.90th=[ 347], 99.95th=[ 372], 00:16:22.811 | 99.99th=[ 372] 00:16:22.811 bw ( KiB/s): min=70656, max=225218, per=5.39%, avg=92361.65, stdev=39630.28, samples=20 00:16:22.811 iops : min= 276, max= 879, avg=360.65, stdev=154.71, samples=20 00:16:22.811 lat (msec) : 10=1.04%, 20=2.94%, 50=2.89%, 100=9.40%, 250=82.40% 00:16:22.811 lat (msec) : 500=1.33% 00:16:22.811 cpu : usr=0.22%, sys=1.38%, ctx=852, majf=0, minf=4097 00:16:22.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:22.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.811 issued rwts: total=3671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.811 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.811 job10: (groupid=0, jobs=1): err= 0: pid=88795: Tue Jul 23 04:12:14 2024 00:16:22.811 
read: IOPS=364, BW=91.1MiB/s (95.5MB/s)(926MiB/10158msec) 00:16:22.811 slat (usec): min=15, max=97401, avg=2696.98, stdev=7365.08 00:16:22.811 clat (msec): min=24, max=363, avg=172.65, stdev=60.77 00:16:22.811 lat (msec): min=24, max=363, avg=175.34, stdev=61.88 00:16:22.811 clat percentiles (msec): 00:16:22.811 | 1.00th=[ 51], 5.00th=[ 65], 10.00th=[ 72], 20.00th=[ 92], 00:16:22.811 | 30.00th=[ 163], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 207], 00:16:22.811 | 70.00th=[ 211], 80.00th=[ 213], 90.00th=[ 220], 95.00th=[ 226], 00:16:22.811 | 99.00th=[ 262], 99.50th=[ 309], 99.90th=[ 359], 99.95th=[ 363], 00:16:22.811 | 99.99th=[ 363] 00:16:22.811 bw ( KiB/s): min=70003, max=225852, per=5.44%, avg=93086.90, stdev=41919.11, samples=20 00:16:22.811 iops : min= 273, max= 882, avg=363.50, stdev=163.60, samples=20 00:16:22.811 lat (msec) : 50=0.95%, 100=22.07%, 250=75.63%, 500=1.35% 00:16:22.811 cpu : usr=0.05%, sys=1.12%, ctx=970, majf=0, minf=4097 00:16:22.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:22.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:22.811 issued rwts: total=3702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.811 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:22.811 00:16:22.811 Run status group 0 (all jobs): 00:16:22.811 READ: bw=1672MiB/s (1754MB/s), 85.3MiB/s-431MiB/s (89.5MB/s-452MB/s), io=16.6GiB (17.8GB), run=10011-10163msec 00:16:22.811 00:16:22.811 Disk stats (read/write): 00:16:22.811 nvme0n1: ios=21082/0, merge=0/0, ticks=1241074/0, in_queue=1241074, util=97.81% 00:16:22.811 nvme10n1: ios=7743/0, merge=0/0, ticks=1221337/0, in_queue=1221337, util=97.93% 00:16:22.811 nvme1n1: ios=6804/0, merge=0/0, ticks=1221053/0, in_queue=1221053, util=98.11% 00:16:22.811 nvme2n1: ios=6973/0, merge=0/0, ticks=1223153/0, in_queue=1223153, util=98.22% 00:16:22.811 nvme3n1: ios=21218/0, merge=0/0, ticks=1209132/0, in_queue=1209132, util=98.24% 00:16:22.811 nvme4n1: ios=7352/0, merge=0/0, ticks=1222947/0, in_queue=1222947, util=98.57% 00:16:22.811 nvme5n1: ios=6865/0, merge=0/0, ticks=1222097/0, in_queue=1222097, util=98.59% 00:16:22.811 nvme6n1: ios=7232/0, merge=0/0, ticks=1228476/0, in_queue=1228476, util=98.67% 00:16:22.811 nvme7n1: ios=33499/0, merge=0/0, ticks=1208697/0, in_queue=1208697, util=98.83% 00:16:22.811 nvme8n1: ios=7221/0, merge=0/0, ticks=1224828/0, in_queue=1224828, util=98.96% 00:16:22.811 nvme9n1: ios=7289/0, merge=0/0, ticks=1224813/0, in_queue=1224813, util=99.17% 00:16:22.811 04:12:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:22.811 [global] 00:16:22.811 thread=1 00:16:22.811 invalidate=1 00:16:22.811 rw=randwrite 00:16:22.811 time_based=1 00:16:22.811 runtime=10 00:16:22.811 ioengine=libaio 00:16:22.811 direct=1 00:16:22.811 bs=262144 00:16:22.811 iodepth=64 00:16:22.811 norandommap=1 00:16:22.811 numjobs=1 00:16:22.811 00:16:22.811 [job0] 00:16:22.811 filename=/dev/nvme0n1 00:16:22.811 [job1] 00:16:22.811 filename=/dev/nvme10n1 00:16:22.811 [job2] 00:16:22.811 filename=/dev/nvme1n1 00:16:22.811 [job3] 00:16:22.811 filename=/dev/nvme2n1 00:16:22.811 [job4] 00:16:22.811 filename=/dev/nvme3n1 00:16:22.811 [job5] 00:16:22.811 filename=/dev/nvme4n1 00:16:22.811 [job6] 00:16:22.811 filename=/dev/nvme5n1 00:16:22.811 [job7] 00:16:22.811 filename=/dev/nvme6n1 
00:16:22.811 [job8] 00:16:22.811 filename=/dev/nvme7n1 00:16:22.811 [job9] 00:16:22.811 filename=/dev/nvme8n1 00:16:22.811 [job10] 00:16:22.811 filename=/dev/nvme9n1 00:16:22.811 Could not set queue depth (nvme0n1) 00:16:22.811 Could not set queue depth (nvme10n1) 00:16:22.811 Could not set queue depth (nvme1n1) 00:16:22.811 Could not set queue depth (nvme2n1) 00:16:22.811 Could not set queue depth (nvme3n1) 00:16:22.811 Could not set queue depth (nvme4n1) 00:16:22.811 Could not set queue depth (nvme5n1) 00:16:22.811 Could not set queue depth (nvme6n1) 00:16:22.811 Could not set queue depth (nvme7n1) 00:16:22.811 Could not set queue depth (nvme8n1) 00:16:22.811 Could not set queue depth (nvme9n1) 00:16:22.811 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.811 fio-3.35 00:16:22.811 Starting 11 threads 00:16:32.805 00:16:32.805 job0: (groupid=0, jobs=1): err= 0: pid=88999: Tue Jul 23 04:12:25 2024 00:16:32.805 write: IOPS=282, BW=70.6MiB/s (74.0MB/s)(720MiB/10202msec); 0 zone resets 00:16:32.805 slat (usec): min=20, max=37342, avg=3470.17, stdev=6024.25 00:16:32.805 clat (msec): min=20, max=420, avg=223.08, stdev=26.63 00:16:32.805 lat (msec): min=20, max=420, avg=226.55, stdev=26.34 00:16:32.805 clat percentiles (msec): 00:16:32.805 | 1.00th=[ 110], 5.00th=[ 197], 10.00th=[ 213], 20.00th=[ 215], 00:16:32.805 | 30.00th=[ 220], 40.00th=[ 228], 50.00th=[ 230], 60.00th=[ 230], 00:16:32.805 | 70.00th=[ 230], 80.00th=[ 232], 90.00th=[ 232], 95.00th=[ 234], 00:16:32.805 | 99.00th=[ 317], 99.50th=[ 363], 99.90th=[ 405], 99.95th=[ 422], 00:16:32.805 | 99.99th=[ 422] 00:16:32.805 bw ( KiB/s): min=69493, max=81920, per=6.48%, avg=72086.65, stdev=2696.15, samples=20 00:16:32.805 iops : min= 271, max= 320, avg=281.50, stdev=10.57, samples=20 00:16:32.805 lat (msec) : 50=0.56%, 100=0.42%, 250=97.43%, 500=1.60% 00:16:32.805 cpu : usr=0.63%, sys=0.92%, ctx=3725, majf=0, minf=1 00:16:32.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:16:32.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.805 issued rwts: total=0,2880,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:16:32.805 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.805 job1: (groupid=0, jobs=1): err= 0: pid=89000: Tue Jul 23 04:12:25 2024 00:16:32.805 write: IOPS=280, BW=70.0MiB/s (73.4MB/s)(714MiB/10200msec); 0 zone resets 00:16:32.805 slat (usec): min=18, max=41780, avg=3495.94, stdev=6119.36 00:16:32.805 clat (msec): min=20, max=427, avg=224.93, stdev=28.12 00:16:32.805 lat (msec): min=20, max=427, avg=228.42, stdev=27.87 00:16:32.805 clat percentiles (msec): 00:16:32.805 | 1.00th=[ 79], 5.00th=[ 213], 10.00th=[ 215], 20.00th=[ 218], 00:16:32.805 | 30.00th=[ 226], 40.00th=[ 228], 50.00th=[ 230], 60.00th=[ 230], 00:16:32.805 | 70.00th=[ 230], 80.00th=[ 232], 90.00th=[ 232], 95.00th=[ 236], 00:16:32.805 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 426], 00:16:32.805 | 99.99th=[ 426] 00:16:32.805 bw ( KiB/s): min=68096, max=73728, per=6.43%, avg=71486.40, stdev=1401.18, samples=20 00:16:32.805 iops : min= 266, max= 288, avg=279.20, stdev= 5.47, samples=20 00:16:32.805 lat (msec) : 50=0.56%, 100=0.84%, 250=96.85%, 500=1.75% 00:16:32.805 cpu : usr=0.60%, sys=0.97%, ctx=3509, majf=0, minf=1 00:16:32.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:16:32.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.805 issued rwts: total=0,2856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.805 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.805 job2: (groupid=0, jobs=1): err= 0: pid=89012: Tue Jul 23 04:12:25 2024 00:16:32.805 write: IOPS=1064, BW=266MiB/s (279MB/s)(2674MiB/10051msec); 0 zone resets 00:16:32.805 slat (usec): min=17, max=15444, avg=930.98, stdev=1603.57 00:16:32.805 clat (msec): min=14, max=160, avg=59.19, stdev=11.67 00:16:32.805 lat (msec): min=14, max=160, avg=60.12, stdev=11.76 00:16:32.805 clat percentiles (msec): 00:16:32.805 | 1.00th=[ 55], 5.00th=[ 55], 10.00th=[ 55], 20.00th=[ 56], 00:16:32.805 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 58], 60.00th=[ 59], 00:16:32.805 | 70.00th=[ 59], 80.00th=[ 59], 90.00th=[ 60], 95.00th=[ 61], 00:16:32.805 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 159], 99.95th=[ 161], 00:16:32.805 | 99.99th=[ 161] 00:16:32.805 bw ( KiB/s): min=118784, max=282112, per=24.46%, avg=272120.35, stdev=36183.01, samples=20 00:16:32.805 iops : min= 464, max= 1102, avg=1062.85, stdev=141.31, samples=20 00:16:32.805 lat (msec) : 20=0.04%, 50=0.19%, 100=97.93%, 250=1.84% 00:16:32.805 cpu : usr=1.58%, sys=2.17%, ctx=13397, majf=0, minf=1 00:16:32.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:32.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.805 issued rwts: total=0,10696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.805 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.805 job3: (groupid=0, jobs=1): err= 0: pid=89013: Tue Jul 23 04:12:25 2024 00:16:32.805 write: IOPS=410, BW=103MiB/s (108MB/s)(1041MiB/10145msec); 0 zone resets 00:16:32.805 slat (usec): min=20, max=20291, avg=2396.80, stdev=4128.23 00:16:32.805 clat (msec): min=8, max=297, avg=153.39, stdev=20.31 00:16:32.805 lat (msec): min=8, max=297, avg=155.79, stdev=20.20 00:16:32.805 clat percentiles (msec): 00:16:32.805 | 1.00th=[ 80], 5.00th=[ 96], 10.00th=[ 148], 20.00th=[ 150], 00:16:32.805 | 30.00th=[ 153], 
40.00th=[ 159], 50.00th=[ 159], 60.00th=[ 159], 00:16:32.805 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 163], 00:16:32.805 | 99.00th=[ 199], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:16:32.805 | 99.99th=[ 296] 00:16:32.805 bw ( KiB/s): min=100864, max=143872, per=9.43%, avg=104944.60, stdev=9236.67, samples=20 00:16:32.805 iops : min= 394, max= 562, avg=409.90, stdev=36.09, samples=20 00:16:32.805 lat (msec) : 10=0.02%, 50=0.48%, 100=4.88%, 250=94.19%, 500=0.43% 00:16:32.805 cpu : usr=0.78%, sys=1.39%, ctx=4750, majf=0, minf=1 00:16:32.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:32.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.805 issued rwts: total=0,4164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.805 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.805 job4: (groupid=0, jobs=1): err= 0: pid=89014: Tue Jul 23 04:12:25 2024 00:16:32.805 write: IOPS=272, BW=68.1MiB/s (71.4MB/s)(694MiB/10189msec); 0 zone resets 00:16:32.805 slat (usec): min=22, max=92677, avg=3601.13, stdev=6535.02 00:16:32.805 clat (msec): min=95, max=418, avg=231.35, stdev=20.43 00:16:32.805 lat (msec): min=95, max=418, avg=234.95, stdev=19.68 00:16:32.805 clat percentiles (msec): 00:16:32.805 | 1.00th=[ 169], 5.00th=[ 215], 10.00th=[ 218], 20.00th=[ 224], 00:16:32.805 | 30.00th=[ 230], 40.00th=[ 230], 50.00th=[ 230], 60.00th=[ 232], 00:16:32.805 | 70.00th=[ 236], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 247], 00:16:32.805 | 99.00th=[ 317], 99.50th=[ 376], 99.90th=[ 405], 99.95th=[ 418], 00:16:32.805 | 99.99th=[ 418] 00:16:32.805 bw ( KiB/s): min=60416, max=71680, per=6.24%, avg=69373.05, stdev=2702.74, samples=20 00:16:32.805 iops : min= 236, max= 280, avg=270.90, stdev=10.50, samples=20 00:16:32.805 lat (msec) : 100=0.07%, 250=96.54%, 500=3.39% 00:16:32.805 cpu : usr=0.52%, sys=0.53%, ctx=3291, majf=0, minf=1 00:16:32.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:16:32.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.805 issued rwts: total=0,2774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.805 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.805 job5: (groupid=0, jobs=1): err= 0: pid=89015: Tue Jul 23 04:12:25 2024 00:16:32.805 write: IOPS=408, BW=102MiB/s (107MB/s)(1035MiB/10133msec); 0 zone resets 00:16:32.805 slat (usec): min=20, max=23749, avg=2355.65, stdev=4157.75 00:16:32.805 clat (msec): min=8, max=286, avg=154.30, stdev=19.72 00:16:32.805 lat (msec): min=9, max=286, avg=156.65, stdev=19.66 00:16:32.805 clat percentiles (msec): 00:16:32.805 | 1.00th=[ 44], 5.00th=[ 142], 10.00th=[ 148], 20.00th=[ 150], 00:16:32.805 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 159], 60.00th=[ 159], 00:16:32.805 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 161], 00:16:32.805 | 99.00th=[ 188], 99.50th=[ 239], 99.90th=[ 275], 99.95th=[ 279], 00:16:32.805 | 99.99th=[ 288] 00:16:32.805 bw ( KiB/s): min=98816, max=132096, per=9.37%, avg=104278.60, stdev=6712.63, samples=20 00:16:32.805 iops : min= 386, max= 516, avg=407.30, stdev=26.23, samples=20 00:16:32.805 lat (msec) : 10=0.05%, 20=0.29%, 50=0.89%, 100=1.21%, 250=97.22% 00:16:32.805 lat (msec) : 500=0.34% 00:16:32.805 cpu : usr=0.74%, sys=0.84%, ctx=5501, majf=0, minf=1 00:16:32.805 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:32.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.805 issued rwts: total=0,4138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.805 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.805 job6: (groupid=0, jobs=1): err= 0: pid=89016: Tue Jul 23 04:12:25 2024 00:16:32.805 write: IOPS=411, BW=103MiB/s (108MB/s)(1044MiB/10143msec); 0 zone resets 00:16:32.805 slat (usec): min=19, max=12413, avg=2389.47, stdev=4117.04 00:16:32.805 clat (msec): min=10, max=296, avg=153.00, stdev=21.52 00:16:32.805 lat (msec): min=10, max=296, avg=155.39, stdev=21.45 00:16:32.805 clat percentiles (msec): 00:16:32.805 | 1.00th=[ 62], 5.00th=[ 96], 10.00th=[ 148], 20.00th=[ 150], 00:16:32.805 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 159], 60.00th=[ 159], 00:16:32.805 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 161], 00:16:32.806 | 99.00th=[ 197], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:16:32.806 | 99.99th=[ 296] 00:16:32.806 bw ( KiB/s): min=98816, max=151342, per=9.46%, avg=105266.50, stdev=10919.85, samples=20 00:16:32.806 iops : min= 386, max= 591, avg=411.15, stdev=42.62, samples=20 00:16:32.806 lat (msec) : 20=0.24%, 50=0.48%, 100=5.15%, 250=93.70%, 500=0.43% 00:16:32.806 cpu : usr=0.90%, sys=1.26%, ctx=4973, majf=0, minf=1 00:16:32.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:32.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.806 issued rwts: total=0,4176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.806 job7: (groupid=0, jobs=1): err= 0: pid=89017: Tue Jul 23 04:12:25 2024 00:16:32.806 write: IOPS=274, BW=68.7MiB/s (72.1MB/s)(701MiB/10201msec); 0 zone resets 00:16:32.806 slat (usec): min=21, max=84952, avg=3560.12, stdev=6443.37 00:16:32.806 clat (msec): min=4, max=427, avg=229.06, stdev=28.26 00:16:32.806 lat (msec): min=4, max=427, avg=232.62, stdev=27.92 00:16:32.806 clat percentiles (msec): 00:16:32.806 | 1.00th=[ 72], 5.00th=[ 215], 10.00th=[ 215], 20.00th=[ 220], 00:16:32.806 | 30.00th=[ 228], 40.00th=[ 230], 50.00th=[ 230], 60.00th=[ 232], 00:16:32.806 | 70.00th=[ 232], 80.00th=[ 234], 90.00th=[ 243], 95.00th=[ 253], 00:16:32.806 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 426], 00:16:32.806 | 99.99th=[ 426] 00:16:32.806 bw ( KiB/s): min=65536, max=71680, per=6.31%, avg=70174.10, stdev=1970.95, samples=20 00:16:32.806 iops : min= 256, max= 280, avg=274.05, stdev= 7.70, samples=20 00:16:32.806 lat (msec) : 10=0.04%, 50=0.57%, 100=0.71%, 250=91.16%, 500=7.52% 00:16:32.806 cpu : usr=0.71%, sys=0.88%, ctx=2605, majf=0, minf=1 00:16:32.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:16:32.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.806 issued rwts: total=0,2805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.806 job8: (groupid=0, jobs=1): err= 0: pid=89018: Tue Jul 23 04:12:25 2024 00:16:32.806 write: IOPS=285, BW=71.4MiB/s (74.9MB/s)(729MiB/10205msec); 0 zone resets 00:16:32.806 slat (usec): min=20, 
max=48148, avg=3379.35, stdev=6013.63 00:16:32.806 clat (msec): min=13, max=429, avg=220.64, stdev=32.82 00:16:32.806 lat (msec): min=13, max=429, avg=224.02, stdev=32.88 00:16:32.806 clat percentiles (msec): 00:16:32.806 | 1.00th=[ 63], 5.00th=[ 169], 10.00th=[ 207], 20.00th=[ 215], 00:16:32.806 | 30.00th=[ 218], 40.00th=[ 228], 50.00th=[ 230], 60.00th=[ 230], 00:16:32.806 | 70.00th=[ 230], 80.00th=[ 232], 90.00th=[ 232], 95.00th=[ 234], 00:16:32.806 | 99.00th=[ 330], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 430], 00:16:32.806 | 99.99th=[ 430] 00:16:32.806 bw ( KiB/s): min=68096, max=100864, per=6.56%, avg=72963.75, stdev=6688.67, samples=20 00:16:32.806 iops : min= 266, max= 394, avg=284.95, stdev=26.13, samples=20 00:16:32.806 lat (msec) : 20=0.24%, 50=0.55%, 100=0.82%, 250=96.67%, 500=1.72% 00:16:32.806 cpu : usr=0.49%, sys=0.67%, ctx=3815, majf=0, minf=1 00:16:32.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:16:32.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.806 issued rwts: total=0,2914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.806 job9: (groupid=0, jobs=1): err= 0: pid=89019: Tue Jul 23 04:12:25 2024 00:16:32.806 write: IOPS=405, BW=101MiB/s (106MB/s)(1028MiB/10130msec); 0 zone resets 00:16:32.806 slat (usec): min=20, max=47886, avg=2402.46, stdev=4247.41 00:16:32.806 clat (msec): min=18, max=283, avg=155.28, stdev=16.91 00:16:32.806 lat (msec): min=18, max=283, avg=157.68, stdev=16.73 00:16:32.806 clat percentiles (msec): 00:16:32.806 | 1.00th=[ 66], 5.00th=[ 148], 10.00th=[ 148], 20.00th=[ 150], 00:16:32.806 | 30.00th=[ 159], 40.00th=[ 159], 50.00th=[ 159], 60.00th=[ 159], 00:16:32.806 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 163], 00:16:32.806 | 99.00th=[ 186], 99.50th=[ 234], 99.90th=[ 275], 99.95th=[ 275], 00:16:32.806 | 99.99th=[ 284] 00:16:32.806 bw ( KiB/s): min=98816, max=117760, per=9.31%, avg=103530.85, stdev=3659.76, samples=20 00:16:32.806 iops : min= 386, max= 460, avg=404.35, stdev=14.31, samples=20 00:16:32.806 lat (msec) : 20=0.02%, 50=0.51%, 100=1.27%, 250=97.86%, 500=0.34% 00:16:32.806 cpu : usr=0.76%, sys=0.76%, ctx=5531, majf=0, minf=1 00:16:32.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:32.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.806 issued rwts: total=0,4110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.806 job10: (groupid=0, jobs=1): err= 0: pid=89020: Tue Jul 23 04:12:25 2024 00:16:32.806 write: IOPS=278, BW=69.5MiB/s (72.9MB/s)(709MiB/10197msec); 0 zone resets 00:16:32.806 slat (usec): min=18, max=45840, avg=3521.18, stdev=6205.03 00:16:32.806 clat (msec): min=49, max=423, avg=226.47, stdev=25.23 00:16:32.806 lat (msec): min=49, max=423, avg=229.99, stdev=24.84 00:16:32.806 clat percentiles (msec): 00:16:32.806 | 1.00th=[ 112], 5.00th=[ 213], 10.00th=[ 215], 20.00th=[ 218], 00:16:32.806 | 30.00th=[ 228], 40.00th=[ 228], 50.00th=[ 230], 60.00th=[ 230], 00:16:32.806 | 70.00th=[ 230], 80.00th=[ 232], 90.00th=[ 234], 95.00th=[ 243], 00:16:32.806 | 99.00th=[ 321], 99.50th=[ 363], 99.90th=[ 409], 99.95th=[ 422], 00:16:32.806 | 99.99th=[ 422] 00:16:32.806 bw ( KiB/s): min=65536, 
max=73728, per=6.38%, avg=70946.35, stdev=1753.71, samples=20 00:16:32.806 iops : min= 256, max= 288, avg=277.00, stdev= 6.89, samples=20 00:16:32.806 lat (msec) : 50=0.14%, 100=0.71%, 250=95.94%, 500=3.21% 00:16:32.806 cpu : usr=0.60%, sys=0.91%, ctx=3586, majf=0, minf=1 00:16:32.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:16:32.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:32.806 issued rwts: total=0,2836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:32.806 00:16:32.806 Run status group 0 (all jobs): 00:16:32.806 WRITE: bw=1086MiB/s (1139MB/s), 68.1MiB/s-266MiB/s (71.4MB/s-279MB/s), io=10.8GiB (11.6GB), run=10051-10205msec 00:16:32.806 00:16:32.806 Disk stats (read/write): 00:16:32.806 nvme0n1: ios=50/5615, merge=0/0, ticks=37/1205822, in_queue=1205859, util=97.65% 00:16:32.806 nvme10n1: ios=49/5572, merge=0/0, ticks=53/1206276, in_queue=1206329, util=97.95% 00:16:32.806 nvme1n1: ios=41/21224, merge=0/0, ticks=30/1215866, in_queue=1215896, util=98.05% 00:16:32.806 nvme2n1: ios=29/8175, merge=0/0, ticks=33/1209032, in_queue=1209065, util=97.93% 00:16:32.806 nvme3n1: ios=26/5400, merge=0/0, ticks=32/1204311, in_queue=1204343, util=97.88% 00:16:32.806 nvme4n1: ios=0/8109, merge=0/0, ticks=0/1208964, in_queue=1208964, util=98.05% 00:16:32.806 nvme5n1: ios=0/8201, merge=0/0, ticks=0/1209924, in_queue=1209924, util=98.35% 00:16:32.806 nvme6n1: ios=0/5472, merge=0/0, ticks=0/1206184, in_queue=1206184, util=98.45% 00:16:32.806 nvme7n1: ios=0/5692, merge=0/0, ticks=0/1208588, in_queue=1208588, util=98.80% 00:16:32.806 nvme8n1: ios=0/8050, merge=0/0, ticks=0/1208069, in_queue=1208069, util=98.56% 00:16:32.806 nvme9n1: ios=0/5529, merge=0/0, ticks=0/1205471, in_queue=1205471, util=98.84% 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
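The per-job statistics above all come from a single fio run driving one sequential-write job against each of the eleven connected namespaces (/dev/nvme0n1 through /dev/nvme10n1, per the disk stats). A rough stand-alone reconstruction of that workload is sketched below; the 256k block size is inferred from the reported BW/IOPS ratios, the queue depth from the depth=64 latency lines, and libaio is assumed for the engine, so treat these as illustrative values rather than the wrapper's exact flags.

# Hedged reconstruction of the multiconnection write workload
# (not the harness's literal fio invocation).
cat > /tmp/multiconn.fio <<'EOF'
[global]
ioengine=libaio
direct=1
rw=write
# bs inferred from BW/IOPS above; iodepth from the depth=64 latency lines.
bs=256k
iodepth=64
time_based=1
runtime=10

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme1n1
# ...repeat one [jobN] stanza per namespace through /dev/nvme10n1
EOF
fio /tmp/multiconn.fio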
nqn.2016-06.io.spdk:cnode1 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:32.806 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.806 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:32.807 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode3 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:32.807 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:32.807 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode5 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:32.807 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:32.807 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode7 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:32.807 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:32.807 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode9 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.807 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:32.808 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:32.808 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.808 04:12:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.808 rmmod nvme_tcp 00:16:32.808 rmmod nvme_fabrics 00:16:32.808 rmmod nvme_keyring 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 88330 ']' 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 88330 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 88330 ']' 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 88330 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88330 00:16:32.808 killing process with pid 88330 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88330' 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 88330 00:16:32.808 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 88330 00:16:33.374 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:16:33.374 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:33.375 00:16:33.375 real 0m49.011s 00:16:33.375 user 2m40.622s 00:16:33.375 sys 0m34.309s 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:33.375 ************************************ 00:16:33.375 END TEST nvmf_multiconnection 00:16:33.375 ************************************ 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:33.375 ************************************ 00:16:33.375 START TEST nvmf_initiator_timeout 00:16:33.375 ************************************ 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:33.375 * Looking for test storage... 
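The teardown traced above repeats one pattern per subsystem: disconnect the initiator-side controller, wait for the SPDKn serial to disappear from lsblk, then remove the subsystem on the target over RPC. A hand-run equivalent, assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to the target's RPC socket, looks roughly like this:

# Per-subsystem teardown, mirroring the multiconnection trace above.
for i in $(seq 1 11); do
    # Drop the initiator-side controller for this subsystem.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # Wait until the namespace with serial SPDK${i} is gone.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    # Remove the subsystem on the target side.
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done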
00:16:33.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:33.375 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.633 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:33.633 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.634 04:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.634 04:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:33.634 Cannot find device "nvmf_tgt_br" 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.634 Cannot find device "nvmf_tgt_br2" 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:33.634 Cannot find device "nvmf_tgt_br" 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:33.634 Cannot find device "nvmf_tgt_br2" 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:33.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:33.634 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:33.635 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # 
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:33.635 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:33.635 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:33.635 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:33.635 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:33.635 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:33.635 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:33.635 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:33.635 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:33.892 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:33.893 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:33.893 04:12:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:33.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:33.893 00:16:33.893 --- 10.0.0.2 ping statistics --- 00:16:33.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.893 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:33.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
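Condensed into plain commands, the virtual topology that nvmf_veth_init assembles above (and that these pings verify) is: one veth pair per interface, the target-side ends moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the host-side peers enslaved to a bridge next to the initiator's 10.0.0.1, plus iptables rules opening TCP 4420 and allowing bridge-internal forwarding. This is an abbreviated replay of the logged commands, not an excerpt from nvmf/common.sh itself.

# Abbreviated replay of the veth/bridge setup logged above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target, as checked in the log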
00:16:33.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:33.893 00:16:33.893 --- 10.0.0.3 ping statistics --- 00:16:33.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.893 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:33.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:33.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:33.893 00:16:33.893 --- 10.0.0.1 ping statistics --- 00:16:33.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.893 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=89386 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 89386 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 89386 ']' 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
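Once the target is up on its RPC socket, the configuration that follows in the trace boils down to: a 64 MiB malloc bdev wrapped in a delay bdev with small nominal latencies, a TCP transport, one subsystem with the delay bdev as its namespace and a listener on 10.0.0.2:4420, and finally an initiator-side connect using the host NQN generated earlier in this log. Expressed as plain rpc.py / nvme-cli calls (again assuming rpc_cmd is the usual scripts/rpc.py wrapper), this is roughly:

# The initiator_timeout bring-up traced below, restated as stand-alone commands.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
    --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274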
00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.893 04:12:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 [2024-07-23 04:12:27.153408] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:16:33.893 [2024-07-23 04:12:27.154090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.151 [2024-07-23 04:12:27.279867] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:34.151 [2024-07-23 04:12:27.293612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.151 [2024-07-23 04:12:27.346435] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.151 [2024-07-23 04:12:27.346497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.151 [2024-07-23 04:12:27.346523] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.151 [2024-07-23 04:12:27.346531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.151 [2024-07-23 04:12:27.346537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.151 [2024-07-23 04:12:27.346706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.151 [2024-07-23 04:12:27.346832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.151 [2024-07-23 04:12:27.347047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.151 [2024-07-23 04:12:27.347050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.151 [2024-07-23 04:12:27.399641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:35.087 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.087 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:16:35.087 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:35.087 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:35.087 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.088 Malloc0 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:35.088 Delay0 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:35.088 [2024-07-23 04:12:28.158242] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:35.088 [2024-07-23 04:12:28.186412] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:35.088 04:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:35.088 04:12:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:16:36.991 04:12:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:36.992 04:12:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:36.992 04:12:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.992 04:12:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:37.389 04:12:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:37.389 04:12:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:16:37.389 04:12:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=89449 00:16:37.389 04:12:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:37.389 04:12:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:37.389 [global] 00:16:37.389 thread=1 00:16:37.389 invalidate=1 00:16:37.389 rw=write 00:16:37.389 time_based=1 00:16:37.389 runtime=60 00:16:37.389 ioengine=libaio 00:16:37.389 direct=1 00:16:37.389 bs=4096 00:16:37.389 iodepth=1 00:16:37.389 norandommap=0 00:16:37.389 numjobs=1 00:16:37.389 00:16:37.389 verify_dump=1 00:16:37.389 verify_backlog=512 00:16:37.389 verify_state_save=0 00:16:37.389 do_verify=1 00:16:37.389 verify=crc32c-intel 00:16:37.389 [job0] 00:16:37.389 filename=/dev/nvme0n1 00:16:37.389 Could not set queue depth (nvme0n1) 00:16:37.389 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:37.389 fio-3.35 00:16:37.389 Starting 1 thread 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:40.675 true 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:16:40.675 true 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:40.675 true 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:40.675 true 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.675 04:12:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.209 true 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.209 true 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.209 true 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.209 true 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:43.209 04:12:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 89449 00:17:39.451 00:17:39.451 job0: (groupid=0, jobs=1): err= 0: pid=89471: Tue Jul 23 04:13:30 2024 00:17:39.451 read: IOPS=776, BW=3106KiB/s (3181kB/s)(182MiB/60000msec) 00:17:39.451 slat (usec): min=10, max=10622, avg=14.86, stdev=63.81 00:17:39.451 clat (usec): min=159, max=40795k, avg=1088.68, stdev=188992.54 00:17:39.451 lat (usec): min=171, max=40795k, avg=1103.53, stdev=188992.61 00:17:39.451 clat percentiles (usec): 00:17:39.451 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:17:39.451 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 212], 00:17:39.451 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 255], 95.00th=[ 277], 00:17:39.451 | 99.00th=[ 343], 99.50th=[ 375], 99.90th=[ 594], 99.95th=[ 701], 00:17:39.451 | 99.99th=[ 1237] 00:17:39.451 write: IOPS=780, BW=3124KiB/s (3199kB/s)(183MiB/60000msec); 0 zone resets 00:17:39.451 slat (usec): min=13, max=789, avg=21.24, stdev= 7.88 00:17:39.451 clat (usec): min=115, max=1579, avg=158.50, stdev=33.06 00:17:39.451 lat (usec): min=133, max=1599, avg=179.75, stdev=34.88 00:17:39.451 clat percentiles (usec): 00:17:39.451 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 131], 20.00th=[ 137], 00:17:39.451 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 159], 00:17:39.451 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 204], 00:17:39.451 | 99.00th=[ 249], 99.50th=[ 277], 99.90th=[ 469], 99.95th=[ 586], 00:17:39.451 | 99.99th=[ 1254] 00:17:39.451 bw ( KiB/s): min= 4168, max=11712, per=100.00%, avg=9677.26, stdev=1496.11, samples=38 00:17:39.451 iops : min= 1042, max= 2928, avg=2419.32, stdev=374.03, samples=38 00:17:39.451 lat (usec) : 250=93.80%, 500=6.06%, 750=0.11%, 1000=0.01% 00:17:39.451 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:17:39.451 cpu : usr=0.57%, sys=2.14%, ctx=93456, majf=0, minf=2 00:17:39.451 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.451 issued rwts: total=46592,46855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.451 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:39.451 00:17:39.451 Run status group 0 (all jobs): 00:17:39.451 READ: bw=3106KiB/s (3181kB/s), 3106KiB/s-3106KiB/s (3181kB/s-3181kB/s), io=182MiB (191MB), run=60000-60000msec 00:17:39.451 WRITE: bw=3124KiB/s (3199kB/s), 3124KiB/s-3124KiB/s (3199kB/s-3199kB/s), io=183MiB (192MB), run=60000-60000msec 00:17:39.451 00:17:39.451 Disk stats (read/write): 00:17:39.451 nvme0n1: ios=46607/46592, merge=0/0, ticks=10588/8222, in_queue=18810, util=99.66% 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:39.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:39.451 nvmf hotplug test: fio successful as expected 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.451 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.452 rmmod nvme_tcp 00:17:39.452 rmmod nvme_fabrics 00:17:39.452 rmmod nvme_keyring 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 89386 ']' 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 89386 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 89386 ']' 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 89386 00:17:39.452 04:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89386 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:39.452 killing process with pid 89386 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89386' 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 89386 00:17:39.452 04:13:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 89386 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:39.452 00:17:39.452 real 1m4.412s 00:17:39.452 user 3m55.530s 00:17:39.452 sys 0m18.966s 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.452 ************************************ 00:17:39.452 END TEST nvmf_initiator_timeout 00:17:39.452 ************************************ 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:17:39.452 00:17:39.452 real 6m4.597s 00:17:39.452 user 15m14.541s 00:17:39.452 sys 1m52.828s 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.452 04:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.452 ************************************ 00:17:39.452 END TEST nvmf_target_extra 00:17:39.452 ************************************ 00:17:39.452 04:13:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:39.452 04:13:31 
nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:39.452 04:13:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:39.452 04:13:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.452 04:13:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:39.452 ************************************ 00:17:39.452 START TEST nvmf_host 00:17:39.452 ************************************ 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:39.452 * Looking for test storage... 00:17:39.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.452 ************************************ 00:17:39.452 START TEST nvmf_identify 00:17:39.452 ************************************ 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:39.452 * Looking for test storage... 
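The START TEST / END TEST banners around nvmf_host and nvmf_identify come from the run_test wrapper in autotest_common.sh. A heavily simplified sketch of that banner-and-status pattern, with the timing and xtrace bookkeeping of the real helper left out (treat the body as an assumption, not the actual function):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"                      # e.g. the identify.sh invocation logged above
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }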
00:17:39.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:39.452 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:39.453 Cannot find device "nvmf_tgt_br" 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:39.453 Cannot find device "nvmf_tgt_br2" 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:39.453 Cannot find device "nvmf_tgt_br" 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:39.453 Cannot find device "nvmf_tgt_br2" 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:39.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:39.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:39.453 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
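Condensed recap of the nvmf_veth_init commands logged above and continuing just below: one veth pair for the initiator, one for the target (moved into the nvmf_tgt_ns_spdk namespace), addresses from 10.0.0.0/24, and a bridge tying the host-side peers together. Names and addresses are exactly as logged; the second target interface (10.0.0.3) and the iptables rules are omitted here for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br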
00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:39.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:17:39.454 00:17:39.454 --- 10.0.0.2 ping statistics --- 00:17:39.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.454 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:39.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:39.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:39.454 00:17:39.454 --- 10.0.0.3 ping statistics --- 00:17:39.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.454 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:39.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:39.454 00:17:39.454 --- 10.0.0.1 ping statistics --- 00:17:39.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.454 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=90337 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 90337 00:17:39.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 90337 ']' 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.454 04:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 [2024-07-23 04:13:31.749888] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:17:39.454 [2024-07-23 04:13:31.749994] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.454 [2024-07-23 04:13:31.873293] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:39.454 [2024-07-23 04:13:31.889189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.454 [2024-07-23 04:13:31.945945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.454 [2024-07-23 04:13:31.946012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.454 [2024-07-23 04:13:31.946038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.454 [2024-07-23 04:13:31.946045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.454 [2024-07-23 04:13:31.946052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
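The app_setup_trace notices above describe the tracepoints enabled by -e 0xFFFF. Sticking to what those messages state (the spdk_trace binary path is assumed from the build tree used in this run), a snapshot could be captured while the target is up, or the shared-memory buffer kept for offline decoding:

    # live snapshot, exactly as the notice suggests
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
    # or preserve the trace buffer the notice points at for later analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0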
00:17:39.454 [2024-07-23 04:13:31.946480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.454 [2024-07-23 04:13:31.946751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.454 [2024-07-23 04:13:31.946929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.454 [2024-07-23 04:13:31.946936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.454 [2024-07-23 04:13:32.001380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 [2024-07-23 04:13:32.066860] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 Malloc0 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 [2024-07-23 04:13:32.180164] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.454 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.454 [ 00:17:39.454 { 00:17:39.454 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:39.454 "subtype": "Discovery", 00:17:39.454 "listen_addresses": [ 00:17:39.454 { 00:17:39.454 "trtype": "TCP", 00:17:39.454 "adrfam": "IPv4", 00:17:39.454 "traddr": "10.0.0.2", 00:17:39.454 "trsvcid": "4420" 00:17:39.454 } 00:17:39.454 ], 00:17:39.454 "allow_any_host": true, 00:17:39.454 "hosts": [] 00:17:39.454 }, 00:17:39.454 { 00:17:39.454 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.454 "subtype": "NVMe", 00:17:39.454 "listen_addresses": [ 00:17:39.454 { 00:17:39.454 "trtype": "TCP", 00:17:39.454 "adrfam": "IPv4", 00:17:39.454 "traddr": "10.0.0.2", 00:17:39.454 "trsvcid": "4420" 00:17:39.454 } 00:17:39.454 ], 00:17:39.454 "allow_any_host": true, 00:17:39.454 "hosts": [], 00:17:39.454 "serial_number": "SPDK00000000000001", 00:17:39.454 "model_number": "SPDK bdev Controller", 00:17:39.454 "max_namespaces": 32, 00:17:39.454 "min_cntlid": 1, 00:17:39.454 "max_cntlid": 65519, 00:17:39.454 "namespaces": [ 00:17:39.455 { 00:17:39.455 "nsid": 1, 00:17:39.455 "bdev_name": "Malloc0", 00:17:39.455 "name": "Malloc0", 00:17:39.455 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:39.455 "eui64": "ABCDEF0123456789", 00:17:39.455 "uuid": "a5418373-d96c-4738-bcf5-83975bb60645" 00:17:39.455 } 00:17:39.455 ] 00:17:39.455 } 00:17:39.455 ] 00:17:39.455 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.455 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:39.455 [2024-07-23 04:13:32.241423] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:17:39.455 [2024-07-23 04:13:32.241481] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90371 ] 00:17:39.455 [2024-07-23 04:13:32.364638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
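The JSON above is the target-side view from nvmf_get_subsystems, and the spdk_nvme_identify run that follows queries the same discovery service from the host side. Two quick cross-checks that should agree with that output; the RPC socket path is the default used throughout this run, and nvme-cli's discover verb is a standard alternative rather than something this test invokes:

    # target side: dump the subsystem list again over the RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems
    # initiator side: list what the discovery controller at 10.0.0.2:4420 reports
    nvme discover -t tcp -a 10.0.0.2 -s 4420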
00:17:39.455 [2024-07-23 04:13:32.381921] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:39.455 [2024-07-23 04:13:32.382002] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:39.455 [2024-07-23 04:13:32.382009] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:39.455 [2024-07-23 04:13:32.382019] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:39.455 [2024-07-23 04:13:32.382027] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:39.455 [2024-07-23 04:13:32.382138] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:39.455 [2024-07-23 04:13:32.382218] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd49a90 0 00:17:39.455 [2024-07-23 04:13:32.395972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:39.455 [2024-07-23 04:13:32.396006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:39.455 [2024-07-23 04:13:32.396013] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:39.455 [2024-07-23 04:13:32.396016] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:39.455 [2024-07-23 04:13:32.396061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.396068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.396072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49a90) 00:17:39.455 [2024-07-23 04:13:32.396084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:39.455 [2024-07-23 04:13:32.396113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd908c0, cid 0, qid 0 00:17:39.455 [2024-07-23 04:13:32.403944] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.455 [2024-07-23 04:13:32.403966] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.455 [2024-07-23 04:13:32.403987] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.403992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd908c0) on tqpair=0xd49a90 00:17:39.455 [2024-07-23 04:13:32.404005] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:39.455 [2024-07-23 04:13:32.404012] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:39.455 [2024-07-23 04:13:32.404017] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:39.455 [2024-07-23 04:13:32.404035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49a90) 00:17:39.455 [2024-07-23 04:13:32.404053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.455 [2024-07-23 04:13:32.404078] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd908c0, cid 0, qid 0 00:17:39.455 [2024-07-23 04:13:32.404130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.455 [2024-07-23 04:13:32.404136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.455 [2024-07-23 04:13:32.404140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd908c0) on tqpair=0xd49a90 00:17:39.455 [2024-07-23 04:13:32.404149] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:39.455 [2024-07-23 04:13:32.404156] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:39.455 [2024-07-23 04:13:32.404163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49a90) 00:17:39.455 [2024-07-23 04:13:32.404178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.455 [2024-07-23 04:13:32.404195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd908c0, cid 0, qid 0 00:17:39.455 [2024-07-23 04:13:32.404268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.455 [2024-07-23 04:13:32.404275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.455 [2024-07-23 04:13:32.404278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd908c0) on tqpair=0xd49a90 00:17:39.455 [2024-07-23 04:13:32.404288] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:39.455 [2024-07-23 04:13:32.404297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:39.455 [2024-07-23 04:13:32.404304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49a90) 00:17:39.455 [2024-07-23 04:13:32.404320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.455 [2024-07-23 04:13:32.404337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd908c0, cid 0, qid 0 00:17:39.455 [2024-07-23 04:13:32.404384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.455 [2024-07-23 04:13:32.404390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.455 [2024-07-23 04:13:32.404394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd908c0) on tqpair=0xd49a90 00:17:39.455 [2024-07-23 04:13:32.404403] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:39.455 [2024-07-23 04:13:32.404413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49a90) 00:17:39.455 [2024-07-23 04:13:32.404429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.455 [2024-07-23 04:13:32.404446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd908c0, cid 0, qid 0 00:17:39.455 [2024-07-23 04:13:32.404493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.455 [2024-07-23 04:13:32.404499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.455 [2024-07-23 04:13:32.404503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd908c0) on tqpair=0xd49a90 00:17:39.455 [2024-07-23 04:13:32.404512] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:39.455 [2024-07-23 04:13:32.404517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:39.455 [2024-07-23 04:13:32.404524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:39.455 [2024-07-23 04:13:32.404629] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:39.455 [2024-07-23 04:13:32.404634] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:39.455 [2024-07-23 04:13:32.404643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49a90) 00:17:39.455 [2024-07-23 04:13:32.404659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.455 [2024-07-23 04:13:32.404677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd908c0, cid 0, qid 0 00:17:39.455 [2024-07-23 04:13:32.404729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.455 [2024-07-23 04:13:32.404736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.455 [2024-07-23 04:13:32.404739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd908c0) on tqpair=0xd49a90 00:17:39.455 [2024-07-23 04:13:32.404748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:39.455 [2024-07-23 04:13:32.404758] 
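The FABRIC PROPERTY GET/SET capsules above are the fabrics equivalent of MMIO register access: the init sequence reads VS, CAP, CC and CSTS and then writes CC to enable the controller, using the property offsets defined by the NVMe base specification. For reference (macro names are made up for illustration; values are the standard register offsets):

    /* Controller properties touched by the Property Get/Set capsules above
     * (offsets per the NVMe base spec; names illustrative). */
    #define NVME_PROP_CAP  0x00  /* Controller Capabilities  - "read cap"          */
    #define NVME_PROP_VS   0x08  /* Version                  - "read vs"           */
    #define NVME_PROP_CC   0x14  /* Controller Configuration - CC.EN written 0/1   */
    #define NVME_PROP_CSTS 0x1c  /* Controller Status        - CSTS.RDY polled     */
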
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49a90) 00:17:39.455 [2024-07-23 04:13:32.404773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.455 [2024-07-23 04:13:32.404790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd908c0, cid 0, qid 0 00:17:39.455 [2024-07-23 04:13:32.404834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.455 [2024-07-23 04:13:32.404840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.455 [2024-07-23 04:13:32.404844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.455 [2024-07-23 04:13:32.404848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd908c0) on tqpair=0xd49a90 00:17:39.456 [2024-07-23 04:13:32.404853] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:39.456 [2024-07-23 04:13:32.404858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:39.456 [2024-07-23 04:13:32.404866] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:39.456 [2024-07-23 04:13:32.404876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:39.456 [2024-07-23 04:13:32.404886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.404890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49a90) 00:17:39.456 [2024-07-23 04:13:32.404898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.456 [2024-07-23 04:13:32.404931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd908c0, cid 0, qid 0 00:17:39.456 [2024-07-23 04:13:32.405034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.456 [2024-07-23 04:13:32.405043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.456 [2024-07-23 04:13:32.405047] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405051] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd49a90): datao=0, datal=4096, cccid=0 00:17:39.456 [2024-07-23 04:13:32.405062] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd908c0) on tqpair(0xd49a90): expected_datao=0, payload_size=4096 00:17:39.456 [2024-07-23 04:13:32.405067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405075] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405079] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.456 [2024-07-23 04:13:32.405094] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.456 [2024-07-23 04:13:32.405098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd908c0) on tqpair=0xd49a90 00:17:39.456 [2024-07-23 04:13:32.405110] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:39.456 [2024-07-23 04:13:32.405116] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:39.456 [2024-07-23 04:13:32.405121] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:39.456 [2024-07-23 04:13:32.405126] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:39.456 [2024-07-23 04:13:32.405131] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:39.456 [2024-07-23 04:13:32.405136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:39.456 [2024-07-23 04:13:32.405145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:39.456 [2024-07-23 04:13:32.405153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49a90) 00:17:39.456 [2024-07-23 04:13:32.405169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:39.456 [2024-07-23 04:13:32.405190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd908c0, cid 0, qid 0 00:17:39.456 [2024-07-23 04:13:32.405247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.456 [2024-07-23 04:13:32.405254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.456 [2024-07-23 04:13:32.405258] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd908c0) on tqpair=0xd49a90 00:17:39.456 [2024-07-23 04:13:32.405275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49a90) 00:17:39.456 [2024-07-23 04:13:32.405291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.456 [2024-07-23 04:13:32.405297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405320] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd49a90) 00:17:39.456 [2024-07-23 04:13:32.405326] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.456 [2024-07-23 04:13:32.405332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd49a90) 00:17:39.456 [2024-07-23 04:13:32.405345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.456 [2024-07-23 04:13:32.405351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.456 [2024-07-23 04:13:32.405364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.456 [2024-07-23 04:13:32.405369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:39.456 [2024-07-23 04:13:32.405377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:39.456 [2024-07-23 04:13:32.405384] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405388] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd49a90) 00:17:39.456 [2024-07-23 04:13:32.405395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.456 [2024-07-23 04:13:32.405415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd908c0, cid 0, qid 0 00:17:39.456 [2024-07-23 04:13:32.405422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90a40, cid 1, qid 0 00:17:39.456 [2024-07-23 04:13:32.405426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90bc0, cid 2, qid 0 00:17:39.456 [2024-07-23 04:13:32.405431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.456 [2024-07-23 04:13:32.405435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90ec0, cid 4, qid 0 00:17:39.456 [2024-07-23 04:13:32.405519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.456 [2024-07-23 04:13:32.405526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.456 [2024-07-23 04:13:32.405529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90ec0) on tqpair=0xd49a90 00:17:39.456 [2024-07-23 04:13:32.405543] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:39.456 [2024-07-23 04:13:32.405548] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:39.456 [2024-07-23 04:13:32.405559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
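Collected from the _nvme_ctrlr_set_state lines interleaved above (and omitting the intermediate "wait for ..." states), the admin-queue bring-up of this discovery controller walks the following sequence before reaching "ready"; the strings are the log's own, gathered here only as a summary:

    /* Init states observed in this trace, in order (strings copied from the
     * log; these are not SPDK's internal enum identifiers). */
    static const char *const init_states_seen[] = {
        "connect adminq",
        "read vs",
        "read cap",
        "check en",
        "disable and wait for CSTS.RDY = 0",
        "controller is disabled",
        "enable controller by writing CC.EN = 1",
        "wait for CSTS.RDY = 1",
        "reset admin queue",
        "identify controller",
        "configure AER",
        "set keep alive timeout",
        "ready",
    };
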
*DEBUG*: enter 00:17:39.456 [2024-07-23 04:13:32.405564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd49a90) 00:17:39.456 [2024-07-23 04:13:32.405571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.456 [2024-07-23 04:13:32.405589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90ec0, cid 4, qid 0 00:17:39.456 [2024-07-23 04:13:32.405642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.457 [2024-07-23 04:13:32.405649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.457 [2024-07-23 04:13:32.405652] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405656] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd49a90): datao=0, datal=4096, cccid=4 00:17:39.457 [2024-07-23 04:13:32.405661] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd90ec0) on tqpair(0xd49a90): expected_datao=0, payload_size=4096 00:17:39.457 [2024-07-23 04:13:32.405665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405672] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405676] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.457 [2024-07-23 04:13:32.405690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.457 [2024-07-23 04:13:32.405693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90ec0) on tqpair=0xd49a90 00:17:39.457 [2024-07-23 04:13:32.405709] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:39.457 [2024-07-23 04:13:32.405733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd49a90) 00:17:39.457 [2024-07-23 04:13:32.405746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.457 [2024-07-23 04:13:32.405753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd49a90) 00:17:39.457 [2024-07-23 04:13:32.405767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.457 [2024-07-23 04:13:32.405790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90ec0, cid 4, qid 0 00:17:39.457 [2024-07-23 04:13:32.405798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd91040, cid 5, qid 0 00:17:39.457 [2024-07-23 04:13:32.405894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.457 [2024-07-23 04:13:32.405901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.457 [2024-07-23 
04:13:32.405904] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405921] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd49a90): datao=0, datal=1024, cccid=4 00:17:39.457 [2024-07-23 04:13:32.405927] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd90ec0) on tqpair(0xd49a90): expected_datao=0, payload_size=1024 00:17:39.457 [2024-07-23 04:13:32.405931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405938] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405942] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.457 [2024-07-23 04:13:32.405953] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.457 [2024-07-23 04:13:32.405957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd91040) on tqpair=0xd49a90 00:17:39.457 [2024-07-23 04:13:32.405979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.457 [2024-07-23 04:13:32.405986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.457 [2024-07-23 04:13:32.405989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.405993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90ec0) on tqpair=0xd49a90 00:17:39.457 [2024-07-23 04:13:32.406005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd49a90) 00:17:39.457 [2024-07-23 04:13:32.406018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.457 [2024-07-23 04:13:32.406042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90ec0, cid 4, qid 0 00:17:39.457 [2024-07-23 04:13:32.406111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.457 [2024-07-23 04:13:32.406118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.457 [2024-07-23 04:13:32.406121] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406125] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd49a90): datao=0, datal=3072, cccid=4 00:17:39.457 [2024-07-23 04:13:32.406130] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd90ec0) on tqpair(0xd49a90): expected_datao=0, payload_size=3072 00:17:39.457 [2024-07-23 04:13:32.406134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406141] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406144] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.457 [2024-07-23 04:13:32.406158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.457 [2024-07-23 04:13:32.406161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406165] 
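The GET LOG PAGE (02) commands in this part of the trace all target log page 0x70, the discovery log: cdw10 carries the log page ID in bits 07:00 and the 0-based dword count (NUMDL) in bits 31:16, which is how the cdw10 values 00ff0070 and 02ff0070 above, and 00010070 just below, map to the 1024-, 3072- and 8-byte c2h_data transfers (a header-sized read to learn the record count, then the full page, then a re-read of the generation counter). A small sketch of that encoding; build_log_page_cdw10() is an illustrative helper, not an SPDK function:

    #include <stdint.h>

    /* GET LOG PAGE cdw10: bits 07:00 = log page ID (0x70 = discovery log),
     * bits 31:16 = NUMDL, the 0-based number of dwords to transfer. */
    static uint32_t build_log_page_cdw10(uint8_t lid, uint32_t payload_bytes)
    {
        uint32_t numdl = payload_bytes / 4 - 1;   /* 0-based dword count */
        return (numdl << 16) | lid;
    }

    /* build_log_page_cdw10(0x70, 1024) == 0x00ff0070
     * build_log_page_cdw10(0x70, 3072) == 0x02ff0070
     * build_log_page_cdw10(0x70,    8) == 0x00010070 */
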
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90ec0) on tqpair=0xd49a90 00:17:39.457 [2024-07-23 04:13:32.406175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd49a90) 00:17:39.457 [2024-07-23 04:13:32.406186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.457 [2024-07-23 04:13:32.406208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90ec0, cid 4, qid 0 00:17:39.457 [2024-07-23 04:13:32.406275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.457 [2024-07-23 04:13:32.406282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.457 [2024-07-23 04:13:32.406286] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406290] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd49a90): datao=0, datal=8, cccid=4 00:17:39.457 [2024-07-23 04:13:32.406294] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd90ec0) on tqpair(0xd49a90): expected_datao=0, payload_size=8 00:17:39.457 [2024-07-23 04:13:32.406298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406305] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406308] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.457 [2024-07-23 04:13:32.406329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.457 [2024-07-23 04:13:32.406333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.457 [2024-07-23 04:13:32.406337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90ec0) on tqpair=0xd49a90 00:17:39.457 ===================================================== 00:17:39.457 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:39.457 ===================================================== 00:17:39.457 Controller Capabilities/Features 00:17:39.457 ================================ 00:17:39.457 Vendor ID: 0000 00:17:39.457 Subsystem Vendor ID: 0000 00:17:39.457 Serial Number: .................... 00:17:39.457 Model Number: ........................................ 
00:17:39.457 Firmware Version: 24.09 00:17:39.457 Recommended Arb Burst: 0 00:17:39.457 IEEE OUI Identifier: 00 00 00 00:17:39.457 Multi-path I/O 00:17:39.457 May have multiple subsystem ports: No 00:17:39.457 May have multiple controllers: No 00:17:39.457 Associated with SR-IOV VF: No 00:17:39.457 Max Data Transfer Size: 131072 00:17:39.457 Max Number of Namespaces: 0 00:17:39.457 Max Number of I/O Queues: 1024 00:17:39.457 NVMe Specification Version (VS): 1.3 00:17:39.457 NVMe Specification Version (Identify): 1.3 00:17:39.457 Maximum Queue Entries: 128 00:17:39.457 Contiguous Queues Required: Yes 00:17:39.457 Arbitration Mechanisms Supported 00:17:39.457 Weighted Round Robin: Not Supported 00:17:39.457 Vendor Specific: Not Supported 00:17:39.457 Reset Timeout: 15000 ms 00:17:39.457 Doorbell Stride: 4 bytes 00:17:39.457 NVM Subsystem Reset: Not Supported 00:17:39.457 Command Sets Supported 00:17:39.457 NVM Command Set: Supported 00:17:39.457 Boot Partition: Not Supported 00:17:39.457 Memory Page Size Minimum: 4096 bytes 00:17:39.457 Memory Page Size Maximum: 4096 bytes 00:17:39.457 Persistent Memory Region: Not Supported 00:17:39.457 Optional Asynchronous Events Supported 00:17:39.457 Namespace Attribute Notices: Not Supported 00:17:39.457 Firmware Activation Notices: Not Supported 00:17:39.457 ANA Change Notices: Not Supported 00:17:39.457 PLE Aggregate Log Change Notices: Not Supported 00:17:39.457 LBA Status Info Alert Notices: Not Supported 00:17:39.457 EGE Aggregate Log Change Notices: Not Supported 00:17:39.457 Normal NVM Subsystem Shutdown event: Not Supported 00:17:39.457 Zone Descriptor Change Notices: Not Supported 00:17:39.457 Discovery Log Change Notices: Supported 00:17:39.457 Controller Attributes 00:17:39.457 128-bit Host Identifier: Not Supported 00:17:39.457 Non-Operational Permissive Mode: Not Supported 00:17:39.457 NVM Sets: Not Supported 00:17:39.457 Read Recovery Levels: Not Supported 00:17:39.457 Endurance Groups: Not Supported 00:17:39.457 Predictable Latency Mode: Not Supported 00:17:39.457 Traffic Based Keep ALive: Not Supported 00:17:39.457 Namespace Granularity: Not Supported 00:17:39.457 SQ Associations: Not Supported 00:17:39.457 UUID List: Not Supported 00:17:39.457 Multi-Domain Subsystem: Not Supported 00:17:39.457 Fixed Capacity Management: Not Supported 00:17:39.457 Variable Capacity Management: Not Supported 00:17:39.457 Delete Endurance Group: Not Supported 00:17:39.457 Delete NVM Set: Not Supported 00:17:39.457 Extended LBA Formats Supported: Not Supported 00:17:39.457 Flexible Data Placement Supported: Not Supported 00:17:39.457 00:17:39.457 Controller Memory Buffer Support 00:17:39.458 ================================ 00:17:39.458 Supported: No 00:17:39.458 00:17:39.458 Persistent Memory Region Support 00:17:39.458 ================================ 00:17:39.458 Supported: No 00:17:39.458 00:17:39.458 Admin Command Set Attributes 00:17:39.458 ============================ 00:17:39.458 Security Send/Receive: Not Supported 00:17:39.458 Format NVM: Not Supported 00:17:39.458 Firmware Activate/Download: Not Supported 00:17:39.458 Namespace Management: Not Supported 00:17:39.458 Device Self-Test: Not Supported 00:17:39.458 Directives: Not Supported 00:17:39.458 NVMe-MI: Not Supported 00:17:39.458 Virtualization Management: Not Supported 00:17:39.458 Doorbell Buffer Config: Not Supported 00:17:39.458 Get LBA Status Capability: Not Supported 00:17:39.458 Command & Feature Lockdown Capability: Not Supported 00:17:39.458 Abort Command Limit: 1 00:17:39.458 Async 
Event Request Limit: 4 00:17:39.458 Number of Firmware Slots: N/A 00:17:39.458 Firmware Slot 1 Read-Only: N/A 00:17:39.458 Firmware Activation Without Reset: N/A 00:17:39.458 Multiple Update Detection Support: N/A 00:17:39.458 Firmware Update Granularity: No Information Provided 00:17:39.458 Per-Namespace SMART Log: No 00:17:39.458 Asymmetric Namespace Access Log Page: Not Supported 00:17:39.458 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:39.458 Command Effects Log Page: Not Supported 00:17:39.458 Get Log Page Extended Data: Supported 00:17:39.458 Telemetry Log Pages: Not Supported 00:17:39.458 Persistent Event Log Pages: Not Supported 00:17:39.458 Supported Log Pages Log Page: May Support 00:17:39.458 Commands Supported & Effects Log Page: Not Supported 00:17:39.458 Feature Identifiers & Effects Log Page:May Support 00:17:39.458 NVMe-MI Commands & Effects Log Page: May Support 00:17:39.458 Data Area 4 for Telemetry Log: Not Supported 00:17:39.458 Error Log Page Entries Supported: 128 00:17:39.458 Keep Alive: Not Supported 00:17:39.458 00:17:39.458 NVM Command Set Attributes 00:17:39.458 ========================== 00:17:39.458 Submission Queue Entry Size 00:17:39.458 Max: 1 00:17:39.458 Min: 1 00:17:39.458 Completion Queue Entry Size 00:17:39.458 Max: 1 00:17:39.458 Min: 1 00:17:39.458 Number of Namespaces: 0 00:17:39.458 Compare Command: Not Supported 00:17:39.458 Write Uncorrectable Command: Not Supported 00:17:39.458 Dataset Management Command: Not Supported 00:17:39.458 Write Zeroes Command: Not Supported 00:17:39.458 Set Features Save Field: Not Supported 00:17:39.458 Reservations: Not Supported 00:17:39.458 Timestamp: Not Supported 00:17:39.458 Copy: Not Supported 00:17:39.458 Volatile Write Cache: Not Present 00:17:39.458 Atomic Write Unit (Normal): 1 00:17:39.458 Atomic Write Unit (PFail): 1 00:17:39.458 Atomic Compare & Write Unit: 1 00:17:39.458 Fused Compare & Write: Supported 00:17:39.458 Scatter-Gather List 00:17:39.458 SGL Command Set: Supported 00:17:39.458 SGL Keyed: Supported 00:17:39.458 SGL Bit Bucket Descriptor: Not Supported 00:17:39.458 SGL Metadata Pointer: Not Supported 00:17:39.458 Oversized SGL: Not Supported 00:17:39.458 SGL Metadata Address: Not Supported 00:17:39.458 SGL Offset: Supported 00:17:39.458 Transport SGL Data Block: Not Supported 00:17:39.458 Replay Protected Memory Block: Not Supported 00:17:39.458 00:17:39.458 Firmware Slot Information 00:17:39.458 ========================= 00:17:39.458 Active slot: 0 00:17:39.458 00:17:39.458 00:17:39.458 Error Log 00:17:39.458 ========= 00:17:39.458 00:17:39.458 Active Namespaces 00:17:39.458 ================= 00:17:39.458 Discovery Log Page 00:17:39.458 ================== 00:17:39.458 Generation Counter: 2 00:17:39.458 Number of Records: 2 00:17:39.458 Record Format: 0 00:17:39.458 00:17:39.458 Discovery Log Entry 0 00:17:39.458 ---------------------- 00:17:39.458 Transport Type: 3 (TCP) 00:17:39.458 Address Family: 1 (IPv4) 00:17:39.458 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:39.458 Entry Flags: 00:17:39.458 Duplicate Returned Information: 1 00:17:39.458 Explicit Persistent Connection Support for Discovery: 1 00:17:39.458 Transport Requirements: 00:17:39.458 Secure Channel: Not Required 00:17:39.458 Port ID: 0 (0x0000) 00:17:39.458 Controller ID: 65535 (0xffff) 00:17:39.458 Admin Max SQ Size: 128 00:17:39.458 Transport Service Identifier: 4420 00:17:39.458 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:39.458 Transport Address: 10.0.0.2 00:17:39.458 
Discovery Log Entry 1 00:17:39.458 ---------------------- 00:17:39.458 Transport Type: 3 (TCP) 00:17:39.458 Address Family: 1 (IPv4) 00:17:39.458 Subsystem Type: 2 (NVM Subsystem) 00:17:39.458 Entry Flags: 00:17:39.458 Duplicate Returned Information: 0 00:17:39.458 Explicit Persistent Connection Support for Discovery: 0 00:17:39.458 Transport Requirements: 00:17:39.458 Secure Channel: Not Required 00:17:39.458 Port ID: 0 (0x0000) 00:17:39.458 Controller ID: 65535 (0xffff) 00:17:39.458 Admin Max SQ Size: 128 00:17:39.458 Transport Service Identifier: 4420 00:17:39.458 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:39.458 Transport Address: 10.0.0.2 [2024-07-23 04:13:32.406430] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:39.458 [2024-07-23 04:13:32.406443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd908c0) on tqpair=0xd49a90 00:17:39.458 [2024-07-23 04:13:32.406449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.458 [2024-07-23 04:13:32.406455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90a40) on tqpair=0xd49a90 00:17:39.458 [2024-07-23 04:13:32.406459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.458 [2024-07-23 04:13:32.406464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90bc0) on tqpair=0xd49a90 00:17:39.458 [2024-07-23 04:13:32.406469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.458 [2024-07-23 04:13:32.406474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.458 [2024-07-23 04:13:32.406478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.458 [2024-07-23 04:13:32.406487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.458 [2024-07-23 04:13:32.406492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.458 [2024-07-23 04:13:32.406495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.458 [2024-07-23 04:13:32.406503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.458 [2024-07-23 04:13:32.406524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.458 [2024-07-23 04:13:32.406570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.458 [2024-07-23 04:13:32.406576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.458 [2024-07-23 04:13:32.406580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.458 [2024-07-23 04:13:32.406584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.458 [2024-07-23 04:13:32.406592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.458 [2024-07-23 04:13:32.406596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.458 [2024-07-23 04:13:32.406600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.458 [2024-07-23 04:13:32.406607] 
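Everything from the FABRIC CONNECT at the top of this trace through the discovery log dump above, plus the "Prepare to destruct SSD" shutdown that follows, is one discovery-controller attach/detach as seen at DEBUG verbosity. At the public API level that corresponds roughly to the sketch below, assuming the usual spdk_nvme_connect()/spdk_nvme_detach() entry points and the SPDK_NVMF_DISCOVERY_NQN constant from SPDK's public headers (error handling trimmed; this is not the test code itself):

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        /* Recent SPDK releases expect opts_size before spdk_env_opts_init(). */
        opts.opts_size = sizeof(opts);
        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* The discovery subsystem printed above: NVMe/TCP at 10.0.0.2:4420. */
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", SPDK_NVMF_DISCOVERY_NQN);

        /* Drives the admin-queue state machine traced above (connect adminq,
         * property get/set, identify, AER, keep-alive) until the controller
         * reports ready, then returns a controller handle. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* Detach takes the shutdown path seen at the end of the trace
         * ("Prepare to destruct SSD", shutdown notification, CSTS polling). */
        spdk_nvme_detach(ctrlr);
        return 0;
    }
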
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.458 [2024-07-23 04:13:32.406628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.458 [2024-07-23 04:13:32.406685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.458 [2024-07-23 04:13:32.406691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.458 [2024-07-23 04:13:32.406695] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.458 [2024-07-23 04:13:32.406699] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.458 [2024-07-23 04:13:32.406708] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:39.458 [2024-07-23 04:13:32.406713] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:39.458 [2024-07-23 04:13:32.406723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.458 [2024-07-23 04:13:32.406728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.458 [2024-07-23 04:13:32.406732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.458 [2024-07-23 04:13:32.406739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.458 [2024-07-23 04:13:32.406756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.458 [2024-07-23 04:13:32.406804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.458 [2024-07-23 04:13:32.406811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.458 [2024-07-23 04:13:32.406814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.458 [2024-07-23 04:13:32.406818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.406829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.406834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.406837] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.406844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.406861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.406920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.406929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.406932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.406936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.406947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.406952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.406955] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.406963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.407007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.407065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.407072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.407076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.407090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.407107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.407124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.407168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.407174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.407178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.407192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.407208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.407225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.407271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.407278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.407282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.407296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.407327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.407343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.407385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.407392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.407396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.407409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.407425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.407441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.407489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.407495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.407499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.407513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.407529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.407545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.407590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.407596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.407600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.407614] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.407629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.407646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.407691] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.407698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.407701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.407715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.407730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.407747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.407792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.407799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.407803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407807] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.407816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.407832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.407848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.407895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.407901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.407905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.407919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.407940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.407947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.407966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.408012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.408019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.408022] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.408027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.459 [2024-07-23 04:13:32.408037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.408042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.459 [2024-07-23 04:13:32.408045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.459 [2024-07-23 04:13:32.408053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.459 [2024-07-23 04:13:32.408069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.459 [2024-07-23 04:13:32.408110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.459 [2024-07-23 04:13:32.408118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.459 [2024-07-23 04:13:32.408121] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 [2024-07-23 04:13:32.408135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.408151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.408168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.460 [2024-07-23 04:13:32.408216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.460 [2024-07-23 04:13:32.408227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.460 [2024-07-23 04:13:32.408231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 [2024-07-23 04:13:32.408246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.408262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.408279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.460 [2024-07-23 04:13:32.408319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.460 [2024-07-23 04:13:32.408325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.460 [2024-07-23 04:13:32.408329] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 
[2024-07-23 04:13:32.408342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.408358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.408374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.460 [2024-07-23 04:13:32.408417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.460 [2024-07-23 04:13:32.408423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.460 [2024-07-23 04:13:32.408427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 [2024-07-23 04:13:32.408441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.408456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.408472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.460 [2024-07-23 04:13:32.408517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.460 [2024-07-23 04:13:32.408528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.460 [2024-07-23 04:13:32.408532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 [2024-07-23 04:13:32.408547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.408563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.408580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.460 [2024-07-23 04:13:32.408622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.460 [2024-07-23 04:13:32.408629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.460 [2024-07-23 04:13:32.408632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 [2024-07-23 04:13:32.408646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 
04:13:32.408655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.408662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.408678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.460 [2024-07-23 04:13:32.408726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.460 [2024-07-23 04:13:32.408732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.460 [2024-07-23 04:13:32.408736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 [2024-07-23 04:13:32.408749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.408765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.408781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.460 [2024-07-23 04:13:32.408823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.460 [2024-07-23 04:13:32.408829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.460 [2024-07-23 04:13:32.408833] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 [2024-07-23 04:13:32.408847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.408862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.408878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.460 [2024-07-23 04:13:32.408954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.460 [2024-07-23 04:13:32.408962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.460 [2024-07-23 04:13:32.408966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 [2024-07-23 04:13:32.408988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.408997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.409005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.409023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.460 [2024-07-23 04:13:32.409079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.460 [2024-07-23 04:13:32.409091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.460 [2024-07-23 04:13:32.409095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.409099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 [2024-07-23 04:13:32.409111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.409116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.409120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.409128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.409146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.460 [2024-07-23 04:13:32.409194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.460 [2024-07-23 04:13:32.409201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.460 [2024-07-23 04:13:32.409205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.409209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.460 [2024-07-23 04:13:32.409220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.409225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.460 [2024-07-23 04:13:32.409229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.460 [2024-07-23 04:13:32.409236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.460 [2024-07-23 04:13:32.409253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.461 [2024-07-23 04:13:32.409334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.461 [2024-07-23 04:13:32.409340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.461 [2024-07-23 04:13:32.409344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.461 [2024-07-23 04:13:32.409357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409366] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.461 [2024-07-23 04:13:32.409373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.461 [2024-07-23 04:13:32.409389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.461 [2024-07-23 
04:13:32.409434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.461 [2024-07-23 04:13:32.409441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.461 [2024-07-23 04:13:32.409444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.461 [2024-07-23 04:13:32.409458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.461 [2024-07-23 04:13:32.409474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.461 [2024-07-23 04:13:32.409490] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.461 [2024-07-23 04:13:32.409538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.461 [2024-07-23 04:13:32.409544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.461 [2024-07-23 04:13:32.409548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.461 [2024-07-23 04:13:32.409562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.461 [2024-07-23 04:13:32.409578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.461 [2024-07-23 04:13:32.409594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.461 [2024-07-23 04:13:32.409639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.461 [2024-07-23 04:13:32.409646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.461 [2024-07-23 04:13:32.409649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.461 [2024-07-23 04:13:32.409663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.461 [2024-07-23 04:13:32.409679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.461 [2024-07-23 04:13:32.409695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.461 [2024-07-23 04:13:32.409737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.461 [2024-07-23 04:13:32.409745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.461 [2024-07-23 
04:13:32.409748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.461 [2024-07-23 04:13:32.409762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.461 [2024-07-23 04:13:32.409778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.461 [2024-07-23 04:13:32.409794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.461 [2024-07-23 04:13:32.409837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.461 [2024-07-23 04:13:32.409843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.461 [2024-07-23 04:13:32.409847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.461 [2024-07-23 04:13:32.409861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.461 [2024-07-23 04:13:32.409876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.461 [2024-07-23 04:13:32.409893] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.461 [2024-07-23 04:13:32.409939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.461 [2024-07-23 04:13:32.409947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.461 [2024-07-23 04:13:32.409950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.461 [2024-07-23 04:13:32.409965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.409973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.461 [2024-07-23 04:13:32.409980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.461 [2024-07-23 04:13:32.409999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.461 [2024-07-23 04:13:32.410043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.461 [2024-07-23 04:13:32.410049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.461 [2024-07-23 04:13:32.410053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.410057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 
00:17:39.461 [2024-07-23 04:13:32.410067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.410072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.410076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.461 [2024-07-23 04:13:32.410083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.461 [2024-07-23 04:13:32.410099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.461 [2024-07-23 04:13:32.410140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.461 [2024-07-23 04:13:32.410146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.461 [2024-07-23 04:13:32.410150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.410154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.461 [2024-07-23 04:13:32.410164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.410169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.461 [2024-07-23 04:13:32.410172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.461 [2024-07-23 04:13:32.410179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.461 [2024-07-23 04:13:32.410196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.410244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.410250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.410254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.410268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.410283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.410300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.410342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.410350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.410353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.410367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
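The block of repeated DEBUG/NOTICE records running through this part of the trace is the host polling the discovery controller's CSTS register over the admin queue while that controller shuts down; on NVMe/TCP each poll is carried as a Fabrics Property Get capsule, which is why the same FABRIC PROPERTY GET group recurs until nvme_ctrlr_shutdown_poll_async reports completion a little further down. As a rough illustration only (not the code the test runs), the same poll expressed against SPDK's public host API could look like the sketch below, assuming ctrlr is an already-attached struct spdk_nvme_ctrlr pointer:

#include "spdk/nvme.h"

/* Minimal sketch: poll CSTS until the controller reports shutdown complete.
 * Over a fabrics transport each read below is issued as a Property Get on
 * the admin queue, matching the repeated FABRIC PROPERTY GET records here. */
static void
wait_for_shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts;

	do {
		csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
	} while (csts.bits.shst != SPDK_NVME_SHST_COMPLETE);
}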
00:17:39.462 [2024-07-23 04:13:32.410376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.410383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.410400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.410445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.410452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.410456] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.410470] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.410485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.410502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.410550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.410556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.410560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.410573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410582] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.410589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.410605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.410653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.410660] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.410663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.410677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.410693] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.410709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.410759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.410766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.410769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.410783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.410798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.410815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.410857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.410864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.410867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.410881] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.410890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.410907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.410943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.411018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.411026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.411030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.411046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.411062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.411081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.411126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.411133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.411137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.411152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.411169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.411186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.411229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.411251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.411254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.411268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.411285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.411318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.411364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.411371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.411374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.411388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.411404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.462 [2024-07-23 04:13:32.411420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.462 [2024-07-23 04:13:32.411462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.462 [2024-07-23 04:13:32.411469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:39.462 [2024-07-23 04:13:32.411472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.462 [2024-07-23 04:13:32.411486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.462 [2024-07-23 04:13:32.411495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.462 [2024-07-23 04:13:32.411502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.463 [2024-07-23 04:13:32.411518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.463 [2024-07-23 04:13:32.411563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.463 [2024-07-23 04:13:32.411569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.463 [2024-07-23 04:13:32.411573] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.411577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.463 [2024-07-23 04:13:32.411587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.411591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.411595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.463 [2024-07-23 04:13:32.411602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.463 [2024-07-23 04:13:32.411619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.463 [2024-07-23 04:13:32.411665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.463 [2024-07-23 04:13:32.411672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.463 [2024-07-23 04:13:32.411676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.411680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.463 [2024-07-23 04:13:32.411690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.411694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.411698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.463 [2024-07-23 04:13:32.411705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.463 [2024-07-23 04:13:32.411721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.463 [2024-07-23 04:13:32.411769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.463 [2024-07-23 04:13:32.411775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.463 [2024-07-23 04:13:32.411779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.411783] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.463 [2024-07-23 04:13:32.411792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.411797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.411801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.463 [2024-07-23 04:13:32.411808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.463 [2024-07-23 04:13:32.411824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.463 [2024-07-23 04:13:32.411875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.463 [2024-07-23 04:13:32.411886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.463 [2024-07-23 04:13:32.411890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.415947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.463 [2024-07-23 04:13:32.415983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.415989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.415992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49a90) 00:17:39.463 [2024-07-23 04:13:32.416001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.463 [2024-07-23 04:13:32.416025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd90d40, cid 3, qid 0 00:17:39.463 [2024-07-23 04:13:32.416077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.463 [2024-07-23 04:13:32.416084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.463 [2024-07-23 04:13:32.416088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.416092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd90d40) on tqpair=0xd49a90 00:17:39.463 [2024-07-23 04:13:32.416100] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 9 milliseconds 00:17:39.463 00:17:39.463 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:39.463 [2024-07-23 04:13:32.453651] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:17:39.463 [2024-07-23 04:13:32.453701] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90374 ] 00:17:39.463 [2024-07-23 04:13:32.575212] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
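The spdk_nvme_identify run launched above describes the whole target in a single transport ID string passed via -r (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1); the records that follow are the admin-queue connect and controller initialization this triggers. For orientation only, a minimal sketch of the same attach through SPDK's public host API is given below; it is an illustration with pared-down error handling, not the identify tool's actual source, and the application name string is made up:

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";	/* hypothetical application name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same key:value string the test passes on the command line via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Connects the admin queue and drives the "setting state to ..."
	 * initialization sequence that the log records below walk through. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("attached, serial number: %.20s\n", (const char *)cdata->sn);

	spdk_nvme_detach(ctrlr);
	return 0;
}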
00:17:39.463 [2024-07-23 04:13:32.592466] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:39.463 [2024-07-23 04:13:32.592534] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:39.463 [2024-07-23 04:13:32.592540] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:39.463 [2024-07-23 04:13:32.592549] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:39.463 [2024-07-23 04:13:32.592556] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:39.463 [2024-07-23 04:13:32.592642] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:39.463 [2024-07-23 04:13:32.592679] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d63a90 0 00:17:39.463 [2024-07-23 04:13:32.597913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:39.463 [2024-07-23 04:13:32.597935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:39.463 [2024-07-23 04:13:32.597956] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:39.463 [2024-07-23 04:13:32.597959] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:39.463 [2024-07-23 04:13:32.597994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.598000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.598004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d63a90) 00:17:39.463 [2024-07-23 04:13:32.598015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:39.463 [2024-07-23 04:13:32.598041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa8c0, cid 0, qid 0 00:17:39.463 [2024-07-23 04:13:32.605909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.463 [2024-07-23 04:13:32.605932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.463 [2024-07-23 04:13:32.605953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.605958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa8c0) on tqpair=0x1d63a90 00:17:39.463 [2024-07-23 04:13:32.605971] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:39.463 [2024-07-23 04:13:32.605978] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:39.463 [2024-07-23 04:13:32.605985] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:39.463 [2024-07-23 04:13:32.606000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.606005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.606009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d63a90) 00:17:39.463 [2024-07-23 04:13:32.606018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.463 [2024-07-23 04:13:32.606043] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa8c0, cid 0, qid 0 00:17:39.463 [2024-07-23 04:13:32.606097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.463 [2024-07-23 04:13:32.606103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.463 [2024-07-23 04:13:32.606106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.606110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa8c0) on tqpair=0x1d63a90 00:17:39.463 [2024-07-23 04:13:32.606116] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:39.463 [2024-07-23 04:13:32.606123] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:39.463 [2024-07-23 04:13:32.606130] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.606135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.606138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d63a90) 00:17:39.463 [2024-07-23 04:13:32.606145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.463 [2024-07-23 04:13:32.606162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa8c0, cid 0, qid 0 00:17:39.463 [2024-07-23 04:13:32.606503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.463 [2024-07-23 04:13:32.606518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.463 [2024-07-23 04:13:32.606523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.606527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa8c0) on tqpair=0x1d63a90 00:17:39.463 [2024-07-23 04:13:32.606533] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:39.463 [2024-07-23 04:13:32.606542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:39.463 [2024-07-23 04:13:32.606550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.606554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.606558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d63a90) 00:17:39.463 [2024-07-23 04:13:32.606565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.463 [2024-07-23 04:13:32.606585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa8c0, cid 0, qid 0 00:17:39.463 [2024-07-23 04:13:32.606636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.463 [2024-07-23 04:13:32.606643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.463 [2024-07-23 04:13:32.606646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.463 [2024-07-23 04:13:32.606651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa8c0) on tqpair=0x1d63a90 00:17:39.464 [2024-07-23 04:13:32.606656] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:39.464 [2024-07-23 04:13:32.606666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.606671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.606674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d63a90) 00:17:39.464 [2024-07-23 04:13:32.606682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.464 [2024-07-23 04:13:32.606698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa8c0, cid 0, qid 0 00:17:39.464 [2024-07-23 04:13:32.606778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.464 [2024-07-23 04:13:32.606785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.464 [2024-07-23 04:13:32.606789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.606793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa8c0) on tqpair=0x1d63a90 00:17:39.464 [2024-07-23 04:13:32.606798] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:39.464 [2024-07-23 04:13:32.606803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:39.464 [2024-07-23 04:13:32.606811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:39.464 [2024-07-23 04:13:32.606921] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:39.464 [2024-07-23 04:13:32.606927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:39.464 [2024-07-23 04:13:32.606936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.606941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.606945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d63a90) 00:17:39.464 [2024-07-23 04:13:32.606952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.464 [2024-07-23 04:13:32.606973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa8c0, cid 0, qid 0 00:17:39.464 [2024-07-23 04:13:32.607141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.464 [2024-07-23 04:13:32.607153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.464 [2024-07-23 04:13:32.607158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.607162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa8c0) on tqpair=0x1d63a90 00:17:39.464 [2024-07-23 04:13:32.607168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:39.464 [2024-07-23 04:13:32.607182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.607188] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.607192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d63a90) 00:17:39.464 [2024-07-23 04:13:32.607199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.464 [2024-07-23 04:13:32.607219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa8c0, cid 0, qid 0 00:17:39.464 [2024-07-23 04:13:32.607587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.464 [2024-07-23 04:13:32.607602] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.464 [2024-07-23 04:13:32.607606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.607611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa8c0) on tqpair=0x1d63a90 00:17:39.464 [2024-07-23 04:13:32.607616] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:39.464 [2024-07-23 04:13:32.607621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:39.464 [2024-07-23 04:13:32.607630] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:39.464 [2024-07-23 04:13:32.607640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:39.464 [2024-07-23 04:13:32.607649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.607654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d63a90) 00:17:39.464 [2024-07-23 04:13:32.607662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.464 [2024-07-23 04:13:32.607681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa8c0, cid 0, qid 0 00:17:39.464 [2024-07-23 04:13:32.608072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.464 [2024-07-23 04:13:32.608087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.464 [2024-07-23 04:13:32.608092] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608096] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d63a90): datao=0, datal=4096, cccid=0 00:17:39.464 [2024-07-23 04:13:32.608101] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daa8c0) on tqpair(0x1d63a90): expected_datao=0, payload_size=4096 00:17:39.464 [2024-07-23 04:13:32.608106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608114] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608119] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.464 [2024-07-23 04:13:32.608134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.464 [2024-07-23 04:13:32.608138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
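By this point the initialization records above have covered the Fabrics connect, the version (read vs) and capabilities (read cap) property reads, and the CC.EN / CSTS.RDY enable handshake, ending in "controller is ready". Once the attach completes, the values fetched by those read vs / read cap steps are exposed through SPDK's public API; a small illustrative helper (assuming an attached ctrlr, not part of the test) could report them like this:

#include <stdio.h>

#include "spdk/nvme.h"

/* Sketch: print the cached VS and CAP register contents that the
 * "read vs" / "read cap" init states above fetched over the fabric. */
static void
print_vs_and_cap(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	printf("NVMe spec version %u.%u, max queue entries %u, "
	       "ready timeout %u x 500 ms\n",
	       (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
	       (unsigned)cap.bits.mqes + 1, (unsigned)cap.bits.to);
}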
00:17:39.464 [2024-07-23 04:13:32.608142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa8c0) on tqpair=0x1d63a90 00:17:39.464 [2024-07-23 04:13:32.608151] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:39.464 [2024-07-23 04:13:32.608156] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:39.464 [2024-07-23 04:13:32.608161] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:39.464 [2024-07-23 04:13:32.608165] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:39.464 [2024-07-23 04:13:32.608170] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:39.464 [2024-07-23 04:13:32.608175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:39.464 [2024-07-23 04:13:32.608184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:39.464 [2024-07-23 04:13:32.608192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d63a90) 00:17:39.464 [2024-07-23 04:13:32.608209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:39.464 [2024-07-23 04:13:32.608230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa8c0, cid 0, qid 0 00:17:39.464 [2024-07-23 04:13:32.608571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.464 [2024-07-23 04:13:32.608585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.464 [2024-07-23 04:13:32.608590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa8c0) on tqpair=0x1d63a90 00:17:39.464 [2024-07-23 04:13:32.608606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d63a90) 00:17:39.464 [2024-07-23 04:13:32.608622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.464 [2024-07-23 04:13:32.608629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d63a90) 00:17:39.464 [2024-07-23 04:13:32.608642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.464 [2024-07-23 04:13:32.608649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.464 [2024-07-23 
04:13:32.608653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d63a90) 00:17:39.464 [2024-07-23 04:13:32.608662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.464 [2024-07-23 04:13:32.608668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d63a90) 00:17:39.464 [2024-07-23 04:13:32.608682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.464 [2024-07-23 04:13:32.608687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:39.464 [2024-07-23 04:13:32.608696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:39.464 [2024-07-23 04:13:32.608703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.464 [2024-07-23 04:13:32.608707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d63a90) 00:17:39.464 [2024-07-23 04:13:32.608714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.464 [2024-07-23 04:13:32.608734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daa8c0, cid 0, qid 0 00:17:39.464 [2024-07-23 04:13:32.608741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daaa40, cid 1, qid 0 00:17:39.464 [2024-07-23 04:13:32.608746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daabc0, cid 2, qid 0 00:17:39.464 [2024-07-23 04:13:32.608751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daad40, cid 3, qid 0 00:17:39.464 [2024-07-23 04:13:32.608755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daaec0, cid 4, qid 0 00:17:39.464 [2024-07-23 04:13:32.609155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.464 [2024-07-23 04:13:32.609171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.465 [2024-07-23 04:13:32.609176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.609181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daaec0) on tqpair=0x1d63a90 00:17:39.465 [2024-07-23 04:13:32.609190] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:39.465 [2024-07-23 04:13:32.609197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.609207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.609213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number 
of queues (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.609221] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.609226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.609230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d63a90) 00:17:39.465 [2024-07-23 04:13:32.609237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:39.465 [2024-07-23 04:13:32.609258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daaec0, cid 4, qid 0 00:17:39.465 [2024-07-23 04:13:32.609440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.465 [2024-07-23 04:13:32.609447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.465 [2024-07-23 04:13:32.609451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.609455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daaec0) on tqpair=0x1d63a90 00:17:39.465 [2024-07-23 04:13:32.609516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.609528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.609536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.609540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d63a90) 00:17:39.465 [2024-07-23 04:13:32.609548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.465 [2024-07-23 04:13:32.609567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daaec0, cid 4, qid 0 00:17:39.465 [2024-07-23 04:13:32.609859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.465 [2024-07-23 04:13:32.609874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.465 [2024-07-23 04:13:32.609879] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.609883] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d63a90): datao=0, datal=4096, cccid=4 00:17:39.465 [2024-07-23 04:13:32.609888] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daaec0) on tqpair(0x1d63a90): expected_datao=0, payload_size=4096 00:17:39.465 [2024-07-23 04:13:32.613930] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.613953] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.613975] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.613985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.465 [2024-07-23 04:13:32.613991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.465 [2024-07-23 04:13:32.613995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.613999] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daaec0) on tqpair=0x1d63a90 00:17:39.465 [2024-07-23 
04:13:32.614010] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:39.465 [2024-07-23 04:13:32.614023] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d63a90) 00:17:39.465 [2024-07-23 04:13:32.614056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.465 [2024-07-23 04:13:32.614080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daaec0, cid 4, qid 0 00:17:39.465 [2024-07-23 04:13:32.614150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.465 [2024-07-23 04:13:32.614156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.465 [2024-07-23 04:13:32.614160] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614163] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d63a90): datao=0, datal=4096, cccid=4 00:17:39.465 [2024-07-23 04:13:32.614168] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daaec0) on tqpair(0x1d63a90): expected_datao=0, payload_size=4096 00:17:39.465 [2024-07-23 04:13:32.614172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614179] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614183] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.465 [2024-07-23 04:13:32.614222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.465 [2024-07-23 04:13:32.614242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daaec0) on tqpair=0x1d63a90 00:17:39.465 [2024-07-23 04:13:32.614261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d63a90) 00:17:39.465 [2024-07-23 04:13:32.614292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.465 [2024-07-23 04:13:32.614311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daaec0, cid 4, qid 0 00:17:39.465 [2024-07-23 04:13:32.614674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:17:39.465 [2024-07-23 04:13:32.614689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.465 [2024-07-23 04:13:32.614694] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614698] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d63a90): datao=0, datal=4096, cccid=4 00:17:39.465 [2024-07-23 04:13:32.614702] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daaec0) on tqpair(0x1d63a90): expected_datao=0, payload_size=4096 00:17:39.465 [2024-07-23 04:13:32.614707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614714] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614718] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.465 [2024-07-23 04:13:32.614733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.465 [2024-07-23 04:13:32.614737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daaec0) on tqpair=0x1d63a90 00:17:39.465 [2024-07-23 04:13:32.614749] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614792] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:39.465 [2024-07-23 04:13:32.614796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:39.465 [2024-07-23 04:13:32.614802] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:39.465 [2024-07-23 04:13:32.614816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.465 [2024-07-23 04:13:32.614821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d63a90) 00:17:39.466 [2024-07-23 04:13:32.614828] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.466 [2024-07-23 04:13:32.614835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.614839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:17:39.466 [2024-07-23 04:13:32.614843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d63a90) 00:17:39.466 [2024-07-23 04:13:32.614849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.466 [2024-07-23 04:13:32.614873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daaec0, cid 4, qid 0 00:17:39.466 [2024-07-23 04:13:32.614880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dab040, cid 5, qid 0 00:17:39.466 [2024-07-23 04:13:32.615283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.466 [2024-07-23 04:13:32.615300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.466 [2024-07-23 04:13:32.615319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daaec0) on tqpair=0x1d63a90 00:17:39.466 [2024-07-23 04:13:32.615331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.466 [2024-07-23 04:13:32.615336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.466 [2024-07-23 04:13:32.615340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dab040) on tqpair=0x1d63a90 00:17:39.466 [2024-07-23 04:13:32.615355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615360] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d63a90) 00:17:39.466 [2024-07-23 04:13:32.615367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.466 [2024-07-23 04:13:32.615388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dab040, cid 5, qid 0 00:17:39.466 [2024-07-23 04:13:32.615503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.466 [2024-07-23 04:13:32.615510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.466 [2024-07-23 04:13:32.615513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dab040) on tqpair=0x1d63a90 00:17:39.466 [2024-07-23 04:13:32.615528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d63a90) 00:17:39.466 [2024-07-23 04:13:32.615539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.466 [2024-07-23 04:13:32.615556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dab040, cid 5, qid 0 00:17:39.466 [2024-07-23 04:13:32.615709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.466 [2024-07-23 04:13:32.615720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.466 [2024-07-23 04:13:32.615724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dab040) on 
tqpair=0x1d63a90 00:17:39.466 [2024-07-23 04:13:32.615739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d63a90) 00:17:39.466 [2024-07-23 04:13:32.615751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.466 [2024-07-23 04:13:32.615768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dab040, cid 5, qid 0 00:17:39.466 [2024-07-23 04:13:32.615848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.466 [2024-07-23 04:13:32.615854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.466 [2024-07-23 04:13:32.615858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dab040) on tqpair=0x1d63a90 00:17:39.466 [2024-07-23 04:13:32.615879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d63a90) 00:17:39.466 [2024-07-23 04:13:32.615910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.466 [2024-07-23 04:13:32.615936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d63a90) 00:17:39.466 [2024-07-23 04:13:32.615948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.466 [2024-07-23 04:13:32.615955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d63a90) 00:17:39.466 [2024-07-23 04:13:32.615965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.466 [2024-07-23 04:13:32.615973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.615977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d63a90) 00:17:39.466 [2024-07-23 04:13:32.615984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.466 [2024-07-23 04:13:32.616006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dab040, cid 5, qid 0 00:17:39.466 [2024-07-23 04:13:32.616013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daaec0, cid 4, qid 0 00:17:39.466 [2024-07-23 04:13:32.616018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dab1c0, cid 6, qid 0 00:17:39.466 [2024-07-23 04:13:32.616023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dab340, cid 7, qid 0 00:17:39.466 [2024-07-23 04:13:32.616479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.466 [2024-07-23 
04:13:32.616508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.466 [2024-07-23 04:13:32.616528] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616532] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d63a90): datao=0, datal=8192, cccid=5 00:17:39.466 [2024-07-23 04:13:32.616537] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dab040) on tqpair(0x1d63a90): expected_datao=0, payload_size=8192 00:17:39.466 [2024-07-23 04:13:32.616542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616558] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616563] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.466 [2024-07-23 04:13:32.616575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.466 [2024-07-23 04:13:32.616579] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616583] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d63a90): datao=0, datal=512, cccid=4 00:17:39.466 [2024-07-23 04:13:32.616587] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1daaec0) on tqpair(0x1d63a90): expected_datao=0, payload_size=512 00:17:39.466 [2024-07-23 04:13:32.616592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616598] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616602] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.466 [2024-07-23 04:13:32.616613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.466 [2024-07-23 04:13:32.616617] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616621] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d63a90): datao=0, datal=512, cccid=6 00:17:39.466 [2024-07-23 04:13:32.616625] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dab1c0) on tqpair(0x1d63a90): expected_datao=0, payload_size=512 00:17:39.466 [2024-07-23 04:13:32.616629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616636] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616639] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.466 [2024-07-23 04:13:32.616645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:39.466 ===================================================== 00:17:39.466 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:39.466 ===================================================== 00:17:39.466 Controller Capabilities/Features 00:17:39.466 ================================ 00:17:39.466 Vendor ID: 8086 00:17:39.466 Subsystem Vendor ID: 8086 00:17:39.466 Serial Number: SPDK00000000000001 00:17:39.466 Model Number: SPDK bdev Controller 00:17:39.466 Firmware Version: 24.09 00:17:39.466 Recommended Arb Burst: 6 00:17:39.466 IEEE OUI Identifier: e4 d2 5c 00:17:39.466 Multi-path I/O 00:17:39.466 May have multiple subsystem ports: Yes 
00:17:39.466 May have multiple controllers: Yes 00:17:39.466 Associated with SR-IOV VF: No 00:17:39.466 Max Data Transfer Size: 131072 00:17:39.466 Max Number of Namespaces: 32 00:17:39.466 Max Number of I/O Queues: 127 00:17:39.466 NVMe Specification Version (VS): 1.3 00:17:39.466 NVMe Specification Version (Identify): 1.3 00:17:39.466 Maximum Queue Entries: 128 00:17:39.466 Contiguous Queues Required: Yes 00:17:39.466 Arbitration Mechanisms Supported 00:17:39.466 Weighted Round Robin: Not Supported 00:17:39.466 Vendor Specific: Not Supported 00:17:39.466 Reset Timeout: 15000 ms 00:17:39.466 Doorbell Stride: 4 bytes 00:17:39.466 NVM Subsystem Reset: Not Supported 00:17:39.466 Command Sets Supported 00:17:39.466 NVM Command Set: Supported 00:17:39.466 Boot Partition: Not Supported 00:17:39.466 Memory Page Size Minimum: 4096 bytes 00:17:39.466 Memory Page Size Maximum: 4096 bytes 00:17:39.466 Persistent Memory Region: Not Supported 00:17:39.466 Optional Asynchronous Events Supported 00:17:39.466 Namespace Attribute Notices: Supported 00:17:39.466 Firmware Activation Notices: Not Supported 00:17:39.466 ANA Change Notices: Not Supported 00:17:39.466 PLE Aggregate Log Change Notices: Not Supported 00:17:39.466 LBA Status Info Alert Notices: Not Supported 00:17:39.466 EGE Aggregate Log Change Notices: Not Supported 00:17:39.467 Normal NVM Subsystem Shutdown event: Not Supported 00:17:39.467 Zone Descriptor Change Notices: Not Supported 00:17:39.467 Discovery Log Change Notices: Not Supported 00:17:39.467 Controller Attributes 00:17:39.467 128-bit Host Identifier: Supported 00:17:39.467 Non-Operational Permissive Mode: Not Supported 00:17:39.467 NVM Sets: Not Supported 00:17:39.467 Read Recovery Levels: Not Supported 00:17:39.467 Endurance Groups: Not Supported 00:17:39.467 Predictable Latency Mode: Not Supported 00:17:39.467 Traffic Based Keep ALive: Not Supported 00:17:39.467 Namespace Granularity: Not Supported 00:17:39.467 SQ Associations: Not Supported 00:17:39.467 UUID List: Not Supported 00:17:39.467 Multi-Domain Subsystem: Not Supported 00:17:39.467 Fixed Capacity Management: Not Supported 00:17:39.467 Variable Capacity Management: Not Supported 00:17:39.467 Delete Endurance Group: Not Supported 00:17:39.467 Delete NVM Set: Not Supported 00:17:39.467 Extended LBA Formats Supported: Not Supported 00:17:39.467 Flexible Data Placement Supported: Not Supported 00:17:39.467 00:17:39.467 Controller Memory Buffer Support 00:17:39.467 ================================ 00:17:39.467 Supported: No 00:17:39.467 00:17:39.467 Persistent Memory Region Support 00:17:39.467 ================================ 00:17:39.467 Supported: No 00:17:39.467 00:17:39.467 Admin Command Set Attributes 00:17:39.467 ============================ 00:17:39.467 Security Send/Receive: Not Supported 00:17:39.467 Format NVM: Not Supported 00:17:39.467 Firmware Activate/Download: Not Supported 00:17:39.467 Namespace Management: Not Supported 00:17:39.467 Device Self-Test: Not Supported 00:17:39.467 Directives: Not Supported 00:17:39.467 NVMe-MI: Not Supported 00:17:39.467 Virtualization Management: Not Supported 00:17:39.467 Doorbell Buffer Config: Not Supported 00:17:39.467 Get LBA Status Capability: Not Supported 00:17:39.467 Command & Feature Lockdown Capability: Not Supported 00:17:39.467 Abort Command Limit: 4 00:17:39.467 Async Event Request Limit: 4 00:17:39.467 Number of Firmware Slots: N/A 00:17:39.467 Firmware Slot 1 Read-Only: N/A 00:17:39.467 Firmware Activation Without Reset: [2024-07-23 04:13:32.616650] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:39.467 [2024-07-23 04:13:32.616654] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:39.467 [2024-07-23 04:13:32.616658] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d63a90): datao=0, datal=4096, cccid=7 00:17:39.467 [2024-07-23 04:13:32.616662] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dab340) on tqpair(0x1d63a90): expected_datao=0, payload_size=4096 00:17:39.467 [2024-07-23 04:13:32.616667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.467 [2024-07-23 04:13:32.616673] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:39.467 [2024-07-23 04:13:32.616677] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:39.467 [2024-07-23 04:13:32.616685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.467 [2024-07-23 04:13:32.616691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.467 [2024-07-23 04:13:32.616694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.467 [2024-07-23 04:13:32.616698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dab040) on tqpair=0x1d63a90 00:17:39.467 [2024-07-23 04:13:32.616715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.467 [2024-07-23 04:13:32.616722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.467 [2024-07-23 04:13:32.616725] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.467 [2024-07-23 04:13:32.616729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daaec0) on tqpair=0x1d63a90 00:17:39.467 [2024-07-23 04:13:32.616740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.467 [2024-07-23 04:13:32.616746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.467 [2024-07-23 04:13:32.616749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.467 [2024-07-23 04:13:32.616753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dab1c0) on tqpair=0x1d63a90 00:17:39.467 [2024-07-23 04:13:32.616760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.467 [2024-07-23 04:13:32.616766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.467 [2024-07-23 04:13:32.616770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.467 [2024-07-23 04:13:32.616774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dab340) on tqpair=0x1d63a90 00:17:39.467 N/A 00:17:39.467 Multiple Update Detection Support: N/A 00:17:39.467 Firmware Update Granularity: No Information Provided 00:17:39.467 Per-Namespace SMART Log: No 00:17:39.467 Asymmetric Namespace Access Log Page: Not Supported 00:17:39.467 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:39.467 Command Effects Log Page: Supported 00:17:39.467 Get Log Page Extended Data: Supported 00:17:39.467 Telemetry Log Pages: Not Supported 00:17:39.467 Persistent Event Log Pages: Not Supported 00:17:39.467 Supported Log Pages Log Page: May Support 00:17:39.467 Commands Supported & Effects Log Page: Not Supported 00:17:39.467 Feature Identifiers & Effects Log Page:May Support 00:17:39.467 NVMe-MI Commands & Effects Log Page: May Support 00:17:39.467 Data Area 4 for Telemetry Log: Not Supported 00:17:39.467 Error Log Page Entries Supported: 128 00:17:39.467 Keep Alive: Supported 
00:17:39.467 Keep Alive Granularity: 10000 ms 00:17:39.467 00:17:39.467 NVM Command Set Attributes 00:17:39.467 ========================== 00:17:39.467 Submission Queue Entry Size 00:17:39.467 Max: 64 00:17:39.467 Min: 64 00:17:39.467 Completion Queue Entry Size 00:17:39.467 Max: 16 00:17:39.467 Min: 16 00:17:39.467 Number of Namespaces: 32 00:17:39.467 Compare Command: Supported 00:17:39.467 Write Uncorrectable Command: Not Supported 00:17:39.467 Dataset Management Command: Supported 00:17:39.467 Write Zeroes Command: Supported 00:17:39.467 Set Features Save Field: Not Supported 00:17:39.467 Reservations: Supported 00:17:39.467 Timestamp: Not Supported 00:17:39.467 Copy: Supported 00:17:39.467 Volatile Write Cache: Present 00:17:39.467 Atomic Write Unit (Normal): 1 00:17:39.467 Atomic Write Unit (PFail): 1 00:17:39.467 Atomic Compare & Write Unit: 1 00:17:39.467 Fused Compare & Write: Supported 00:17:39.467 Scatter-Gather List 00:17:39.467 SGL Command Set: Supported 00:17:39.467 SGL Keyed: Supported 00:17:39.467 SGL Bit Bucket Descriptor: Not Supported 00:17:39.467 SGL Metadata Pointer: Not Supported 00:17:39.467 Oversized SGL: Not Supported 00:17:39.467 SGL Metadata Address: Not Supported 00:17:39.467 SGL Offset: Supported 00:17:39.467 Transport SGL Data Block: Not Supported 00:17:39.467 Replay Protected Memory Block: Not Supported 00:17:39.467 00:17:39.467 Firmware Slot Information 00:17:39.467 ========================= 00:17:39.467 Active slot: 1 00:17:39.467 Slot 1 Firmware Revision: 24.09 00:17:39.467 00:17:39.467 00:17:39.467 Commands Supported and Effects 00:17:39.467 ============================== 00:17:39.467 Admin Commands 00:17:39.467 -------------- 00:17:39.467 Get Log Page (02h): Supported 00:17:39.467 Identify (06h): Supported 00:17:39.467 Abort (08h): Supported 00:17:39.467 Set Features (09h): Supported 00:17:39.467 Get Features (0Ah): Supported 00:17:39.467 Asynchronous Event Request (0Ch): Supported 00:17:39.467 Keep Alive (18h): Supported 00:17:39.467 I/O Commands 00:17:39.467 ------------ 00:17:39.467 Flush (00h): Supported LBA-Change 00:17:39.467 Write (01h): Supported LBA-Change 00:17:39.467 Read (02h): Supported 00:17:39.467 Compare (05h): Supported 00:17:39.467 Write Zeroes (08h): Supported LBA-Change 00:17:39.467 Dataset Management (09h): Supported LBA-Change 00:17:39.467 Copy (19h): Supported LBA-Change 00:17:39.467 00:17:39.467 Error Log 00:17:39.467 ========= 00:17:39.467 00:17:39.467 Arbitration 00:17:39.467 =========== 00:17:39.467 Arbitration Burst: 1 00:17:39.467 00:17:39.467 Power Management 00:17:39.467 ================ 00:17:39.467 Number of Power States: 1 00:17:39.467 Current Power State: Power State #0 00:17:39.467 Power State #0: 00:17:39.467 Max Power: 0.00 W 00:17:39.467 Non-Operational State: Operational 00:17:39.467 Entry Latency: Not Reported 00:17:39.467 Exit Latency: Not Reported 00:17:39.467 Relative Read Throughput: 0 00:17:39.467 Relative Read Latency: 0 00:17:39.467 Relative Write Throughput: 0 00:17:39.467 Relative Write Latency: 0 00:17:39.467 Idle Power: Not Reported 00:17:39.467 Active Power: Not Reported 00:17:39.467 Non-Operational Permissive Mode: Not Supported 00:17:39.467 00:17:39.467 Health Information 00:17:39.467 ================== 00:17:39.467 Critical Warnings: 00:17:39.467 Available Spare Space: OK 00:17:39.467 Temperature: OK 00:17:39.467 Device Reliability: OK 00:17:39.467 Read Only: No 00:17:39.467 Volatile Memory Backup: OK 00:17:39.467 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:39.467 Temperature Threshold: 0 
Kelvin (-273 Celsius) 00:17:39.467 Available Spare: 0% 00:17:39.467 Available Spare Threshold: 0% 00:17:39.467 Life Percentage Used:[2024-07-23 04:13:32.616868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.616875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d63a90) 00:17:39.468 [2024-07-23 04:13:32.616883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.468 [2024-07-23 04:13:32.616920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dab340, cid 7, qid 0 00:17:39.468 [2024-07-23 04:13:32.617087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.468 [2024-07-23 04:13:32.617095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.468 [2024-07-23 04:13:32.617099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.617104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dab340) on tqpair=0x1d63a90 00:17:39.468 [2024-07-23 04:13:32.617144] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:39.468 [2024-07-23 04:13:32.617157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daa8c0) on tqpair=0x1d63a90 00:17:39.468 [2024-07-23 04:13:32.617163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.468 [2024-07-23 04:13:32.617169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daaa40) on tqpair=0x1d63a90 00:17:39.468 [2024-07-23 04:13:32.617174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.468 [2024-07-23 04:13:32.617180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daabc0) on tqpair=0x1d63a90 00:17:39.468 [2024-07-23 04:13:32.617185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.468 [2024-07-23 04:13:32.617190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daad40) on tqpair=0x1d63a90 00:17:39.468 [2024-07-23 04:13:32.617195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.468 [2024-07-23 04:13:32.617204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.617209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.617213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d63a90) 00:17:39.468 [2024-07-23 04:13:32.617222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.468 [2024-07-23 04:13:32.617245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daad40, cid 3, qid 0 00:17:39.468 [2024-07-23 04:13:32.617628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.468 [2024-07-23 04:13:32.617643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.468 [2024-07-23 04:13:32.617648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.617652] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daad40) on tqpair=0x1d63a90 00:17:39.468 [2024-07-23 04:13:32.617660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.617665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.617669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d63a90) 00:17:39.468 [2024-07-23 04:13:32.617676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.468 [2024-07-23 04:13:32.617698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daad40, cid 3, qid 0 00:17:39.468 [2024-07-23 04:13:32.617764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.468 [2024-07-23 04:13:32.617770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.468 [2024-07-23 04:13:32.617774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.617778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daad40) on tqpair=0x1d63a90 00:17:39.468 [2024-07-23 04:13:32.617782] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:39.468 [2024-07-23 04:13:32.617787] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:39.468 [2024-07-23 04:13:32.617797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.617802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.617806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d63a90) 00:17:39.468 [2024-07-23 04:13:32.617813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.468 [2024-07-23 04:13:32.617829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1daad40, cid 3, qid 0 00:17:39.468 [2024-07-23 04:13:32.621910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:39.468 [2024-07-23 04:13:32.621930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:39.468 [2024-07-23 04:13:32.621952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:39.468 [2024-07-23 04:13:32.621956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1daad40) on tqpair=0x1d63a90 00:17:39.468 [2024-07-23 04:13:32.621967] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:17:39.468 0% 00:17:39.468 Data Units Read: 0 00:17:39.468 Data Units Written: 0 00:17:39.468 Host Read Commands: 0 00:17:39.468 Host Write Commands: 0 00:17:39.468 Controller Busy Time: 0 minutes 00:17:39.468 Power Cycles: 0 00:17:39.468 Power On Hours: 0 hours 00:17:39.468 Unsafe Shutdowns: 0 00:17:39.468 Unrecoverable Media Errors: 0 00:17:39.468 Lifetime Error Log Entries: 0 00:17:39.468 Warning Temperature Time: 0 minutes 00:17:39.468 Critical Temperature Time: 0 minutes 00:17:39.468 00:17:39.468 Number of Queues 00:17:39.468 ================ 00:17:39.468 Number of I/O Submission Queues: 127 00:17:39.468 Number of I/O Completion Queues: 127 00:17:39.468 00:17:39.468 Active Namespaces 00:17:39.468 ================= 00:17:39.468 Namespace ID:1 
00:17:39.468 Error Recovery Timeout: Unlimited 00:17:39.468 Command Set Identifier: NVM (00h) 00:17:39.468 Deallocate: Supported 00:17:39.468 Deallocated/Unwritten Error: Not Supported 00:17:39.468 Deallocated Read Value: Unknown 00:17:39.468 Deallocate in Write Zeroes: Not Supported 00:17:39.468 Deallocated Guard Field: 0xFFFF 00:17:39.468 Flush: Supported 00:17:39.468 Reservation: Supported 00:17:39.468 Namespace Sharing Capabilities: Multiple Controllers 00:17:39.468 Size (in LBAs): 131072 (0GiB) 00:17:39.468 Capacity (in LBAs): 131072 (0GiB) 00:17:39.468 Utilization (in LBAs): 131072 (0GiB) 00:17:39.468 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:39.468 EUI64: ABCDEF0123456789 00:17:39.468 UUID: a5418373-d96c-4738-bcf5-83975bb60645 00:17:39.468 Thin Provisioning: Not Supported 00:17:39.468 Per-NS Atomic Units: Yes 00:17:39.468 Atomic Boundary Size (Normal): 0 00:17:39.468 Atomic Boundary Size (PFail): 0 00:17:39.468 Atomic Boundary Offset: 0 00:17:39.468 Maximum Single Source Range Length: 65535 00:17:39.468 Maximum Copy Length: 65535 00:17:39.468 Maximum Source Range Count: 1 00:17:39.468 NGUID/EUI64 Never Reused: No 00:17:39.468 Namespace Write Protected: No 00:17:39.468 Number of LBA Formats: 1 00:17:39.468 Current LBA Format: LBA Format #00 00:17:39.468 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:39.468 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.468 rmmod nvme_tcp 00:17:39.468 rmmod nvme_fabrics 00:17:39.468 rmmod nvme_keyring 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 90337 ']' 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 90337 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 90337 ']' 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 90337 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 
-- # uname 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90337 00:17:39.468 killing process with pid 90337 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90337' 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 90337 00:17:39.468 04:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 90337 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:39.745 00:17:39.745 real 0m1.800s 00:17:39.745 user 0m4.201s 00:17:39.745 sys 0m0.589s 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:39.745 ************************************ 00:17:39.745 END TEST nvmf_identify 00:17:39.745 ************************************ 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.745 04:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.004 ************************************ 00:17:40.004 START TEST nvmf_perf 00:17:40.004 ************************************ 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:40.004 * Looking for test storage... 
00:17:40.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.004 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.005 04:13:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:40.005 Cannot find device "nvmf_tgt_br" 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.005 Cannot find device "nvmf_tgt_br2" 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:40.005 Cannot find device "nvmf_tgt_br" 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:17:40.005 Cannot find device "nvmf_tgt_br2" 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:40.005 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:40.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:17:40.263 00:17:40.263 --- 10.0.0.2 ping statistics --- 00:17:40.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.263 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:40.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:40.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:40.263 00:17:40.263 --- 10.0.0.3 ping statistics --- 00:17:40.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.263 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:40.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:40.263 00:17:40.263 --- 10.0.0.1 ping statistics --- 00:17:40.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.263 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=90541 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 90541 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 90541 ']' 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:40.263 04:13:33 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.263 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:40.263 [2024-07-23 04:13:33.598807] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:17:40.263 [2024-07-23 04:13:33.598878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.521 [2024-07-23 04:13:33.716146] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:40.521 [2024-07-23 04:13:33.730390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.521 [2024-07-23 04:13:33.786788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.521 [2024-07-23 04:13:33.786852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.521 [2024-07-23 04:13:33.786862] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.521 [2024-07-23 04:13:33.786869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.521 [2024-07-23 04:13:33.786875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
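
[editor's note] The trace above first tears down any leftover test interfaces and then builds the loopback NVMe/TCP topology used by the rest of this test. For reference, that setup can be reproduced standalone roughly as follows; this is a minimal sketch assuming root privileges and the same interface names and 10.0.0.0/24 addresses the script uses (the real helper in nvmf/common.sh also performs the stale-device cleanup seen at the start of this block):

  # Namespace for the SPDK target, plus three veth pairs: one initiator-side,
  # two target-side (the *_br ends stay on the host and join the bridge)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: 10.0.0.1 on the initiator side, 10.0.0.2/10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peers so 10.0.0.1 can reach the namespaced addresses,
  # then open TCP/4420 (the NVMe/TCP listener port) and allow bridge forwarding
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target, as verified in the log above

The target whose EAL banner appears just above is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so its listeners bind to 10.0.0.2/10.0.0.3 while the perf initiator runs from the host side at 10.0.0.1.
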
00:17:40.521 [2024-07-23 04:13:33.787063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.521 [2024-07-23 04:13:33.787350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.521 [2024-07-23 04:13:33.787352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.521 [2024-07-23 04:13:33.787174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.521 [2024-07-23 04:13:33.838961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:40.779 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.780 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:17:40.780 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.780 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:40.780 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:40.780 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.780 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:40.780 04:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:41.038 04:13:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:41.038 04:13:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:41.297 04:13:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:41.555 04:13:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:41.813 04:13:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:41.813 04:13:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:41.813 04:13:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:41.813 04:13:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:41.813 04:13:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:42.072 [2024-07-23 04:13:35.208590] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.072 04:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:42.330 04:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:42.330 04:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:42.330 04:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:42.330 04:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:42.897 04:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:17:42.897 [2024-07-23 04:13:36.161926] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.897 04:13:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:43.155 04:13:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:43.155 04:13:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:43.155 04:13:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:43.155 04:13:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:44.530 Initializing NVMe Controllers 00:17:44.530 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:44.530 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:44.530 Initialization complete. Launching workers. 00:17:44.530 ======================================================== 00:17:44.530 Latency(us) 00:17:44.530 Device Information : IOPS MiB/s Average min max 00:17:44.530 PCIE (0000:00:10.0) NSID 1 from core 0: 21792.00 85.12 1467.74 396.56 8066.19 00:17:44.530 ======================================================== 00:17:44.530 Total : 21792.00 85.12 1467.74 396.56 8066.19 00:17:44.530 00:17:44.531 04:13:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:45.465 Initializing NVMe Controllers 00:17:45.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:45.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:45.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:45.465 Initialization complete. Launching workers. 00:17:45.465 ======================================================== 00:17:45.465 Latency(us) 00:17:45.466 Device Information : IOPS MiB/s Average min max 00:17:45.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4125.50 16.12 242.13 95.55 6143.31 00:17:45.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.74 0.49 8015.55 5979.04 12024.11 00:17:45.466 ======================================================== 00:17:45.466 Total : 4251.24 16.61 472.05 95.55 12024.11 00:17:45.466 00:17:45.724 04:13:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:47.101 Initializing NVMe Controllers 00:17:47.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:47.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:47.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:47.101 Initialization complete. Launching workers. 
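
[editor's note] The -r transport string is what distinguishes the perf runs in this stretch: the first invocation ('trtype:PCIe traddr:0000:00:10.0') is the local-NVMe baseline, while the later ones ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420') go through the NVMe/TCP listener created above, with queue depth (-q), I/O size (-o) and read mix (-M) swept run by run; the latency table for the -q 32 -HI run just launched follows directly below. A standalone invocation against the same listener would look roughly like this (a sketch using the paths and addresses from this log; the 60-second runtime is an illustrative choice, the test itself uses -t 1):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 32 -o 4096 -w randrw -M 50 -t 60 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
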
00:17:47.101 ======================================================== 00:17:47.101 Latency(us) 00:17:47.101 Device Information : IOPS MiB/s Average min max 00:17:47.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9495.00 37.09 3374.32 488.73 6992.52 00:17:47.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4028.00 15.73 7996.76 6428.91 14152.14 00:17:47.101 ======================================================== 00:17:47.101 Total : 13523.00 52.82 4751.17 488.73 14152.14 00:17:47.101 00:17:47.101 04:13:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:47.101 04:13:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:49.638 Initializing NVMe Controllers 00:17:49.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.639 Controller IO queue size 128, less than required. 00:17:49.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.639 Controller IO queue size 128, less than required. 00:17:49.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:49.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:49.639 Initialization complete. Launching workers. 00:17:49.639 ======================================================== 00:17:49.639 Latency(us) 00:17:49.639 Device Information : IOPS MiB/s Average min max 00:17:49.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1876.33 469.08 69347.69 38752.03 127548.40 00:17:49.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 677.94 169.48 192041.32 79276.07 315473.92 00:17:49.639 ======================================================== 00:17:49.639 Total : 2554.26 638.57 101912.29 38752.03 315473.92 00:17:49.639 00:17:49.639 04:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:49.639 Initializing NVMe Controllers 00:17:49.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.639 Controller IO queue size 128, less than required. 00:17:49.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.639 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:49.639 Controller IO queue size 128, less than required. 00:17:49.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.639 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:17:49.639 WARNING: Some requested NVMe devices were skipped 00:17:49.639 No valid NVMe controllers or AIO or URING devices found 00:17:49.897 04:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:52.428 Initializing NVMe Controllers 00:17:52.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:52.428 Controller IO queue size 128, less than required. 00:17:52.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:52.428 Controller IO queue size 128, less than required. 00:17:52.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:52.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:52.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:52.428 Initialization complete. Launching workers. 00:17:52.428 00:17:52.428 ==================== 00:17:52.428 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:52.428 TCP transport: 00:17:52.428 polls: 11755 00:17:52.428 idle_polls: 8435 00:17:52.428 sock_completions: 3320 00:17:52.428 nvme_completions: 6387 00:17:52.428 submitted_requests: 9556 00:17:52.428 queued_requests: 1 00:17:52.428 00:17:52.428 ==================== 00:17:52.428 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:52.428 TCP transport: 00:17:52.428 polls: 11999 00:17:52.428 idle_polls: 8169 00:17:52.428 sock_completions: 3830 00:17:52.428 nvme_completions: 6777 00:17:52.428 submitted_requests: 10182 00:17:52.428 queued_requests: 1 00:17:52.428 ======================================================== 00:17:52.428 Latency(us) 00:17:52.428 Device Information : IOPS MiB/s Average min max 00:17:52.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1596.32 399.08 81196.89 34320.33 129748.23 00:17:52.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1693.81 423.45 76276.82 33980.21 118392.23 00:17:52.428 ======================================================== 00:17:52.428 Total : 3290.13 822.53 78663.96 33980.21 129748.23 00:17:52.428 00:17:52.428 04:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:52.428 04:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.687 04:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:17:52.687 04:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:17:52.687 04:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:17:52.946 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=5d8f931f-485e-4e93-819d-f99d28048bfc 00:17:52.946 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 5d8f931f-485e-4e93-819d-f99d28048bfc 00:17:52.946 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=5d8f931f-485e-4e93-819d-f99d28048bfc 00:17:52.946 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local 
lvs_info 00:17:52.946 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:17:52.946 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:17:52.946 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:52.946 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:17:52.946 { 00:17:52.946 "uuid": "5d8f931f-485e-4e93-819d-f99d28048bfc", 00:17:52.946 "name": "lvs_0", 00:17:52.946 "base_bdev": "Nvme0n1", 00:17:52.946 "total_data_clusters": 1278, 00:17:52.946 "free_clusters": 1278, 00:17:52.946 "block_size": 4096, 00:17:52.946 "cluster_size": 4194304 00:17:52.946 } 00:17:52.946 ]' 00:17:52.946 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5d8f931f-485e-4e93-819d-f99d28048bfc") .free_clusters' 00:17:53.205 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:17:53.205 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5d8f931f-485e-4e93-819d-f99d28048bfc") .cluster_size' 00:17:53.205 5112 00:17:53.205 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:17:53.205 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:17:53.205 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:17:53.205 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:17:53.205 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d8f931f-485e-4e93-819d-f99d28048bfc lbd_0 5112 00:17:53.463 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=f6d7b255-c037-4ba1-bd01-db7815b5a337 00:17:53.463 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore f6d7b255-c037-4ba1-bd01-db7815b5a337 lvs_n_0 00:17:53.722 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=9860c1bc-6e59-4461-85c5-9eab556fbe68 00:17:53.722 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 9860c1bc-6e59-4461-85c5-9eab556fbe68 00:17:53.722 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=9860c1bc-6e59-4461-85c5-9eab556fbe68 00:17:53.722 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:17:53.722 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:17:53.722 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:17:53.722 04:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:53.981 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:17:53.981 { 00:17:53.981 "uuid": "5d8f931f-485e-4e93-819d-f99d28048bfc", 00:17:53.981 "name": "lvs_0", 00:17:53.981 "base_bdev": "Nvme0n1", 00:17:53.981 "total_data_clusters": 1278, 00:17:53.981 "free_clusters": 0, 00:17:53.981 "block_size": 4096, 00:17:53.981 "cluster_size": 4194304 00:17:53.981 }, 00:17:53.981 { 00:17:53.981 "uuid": "9860c1bc-6e59-4461-85c5-9eab556fbe68", 00:17:53.981 "name": "lvs_n_0", 00:17:53.981 "base_bdev": 
"f6d7b255-c037-4ba1-bd01-db7815b5a337", 00:17:53.981 "total_data_clusters": 1276, 00:17:53.981 "free_clusters": 1276, 00:17:53.981 "block_size": 4096, 00:17:53.981 "cluster_size": 4194304 00:17:53.981 } 00:17:53.981 ]' 00:17:53.981 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9860c1bc-6e59-4461-85c5-9eab556fbe68") .free_clusters' 00:17:53.981 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:17:53.981 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9860c1bc-6e59-4461-85c5-9eab556fbe68") .cluster_size' 00:17:53.981 5104 00:17:53.981 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:17:53.981 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:17:53.981 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:17:53.981 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:17:53.981 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9860c1bc-6e59-4461-85c5-9eab556fbe68 lbd_nest_0 5104 00:17:54.239 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=18957a28-b881-40c0-862f-5b1bd8e4076a 00:17:54.239 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.497 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:17:54.497 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 18957a28-b881-40c0-862f-5b1bd8e4076a 00:17:54.756 04:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.014 04:13:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:17:55.015 04:13:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:17:55.015 04:13:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:55.015 04:13:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:55.015 04:13:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:55.273 Initializing NVMe Controllers 00:17:55.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.273 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:55.273 WARNING: Some requested NVMe devices were skipped 00:17:55.273 No valid NVMe controllers or AIO or URING devices found 00:17:55.273 04:13:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:55.273 04:13:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:07.507 Initializing NVMe Controllers 00:18:07.508 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:07.508 Initialization complete. Launching workers. 00:18:07.508 ======================================================== 00:18:07.508 Latency(us) 00:18:07.508 Device Information : IOPS MiB/s Average min max 00:18:07.508 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 950.00 118.75 1052.27 346.65 8405.13 00:18:07.508 ======================================================== 00:18:07.508 Total : 950.00 118.75 1052.27 346.65 8405.13 00:18:07.508 00:18:07.508 04:13:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:07.508 04:13:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:07.508 04:13:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:07.508 Initializing NVMe Controllers 00:18:07.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.508 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:07.508 WARNING: Some requested NVMe devices were skipped 00:18:07.508 No valid NVMe controllers or AIO or URING devices found 00:18:07.508 04:13:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:07.508 04:13:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:17.489 Initializing NVMe Controllers 00:18:17.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:17.489 Initialization complete. Launching workers. 
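
[editor's note] The "No valid NVMe controllers or AIO or URING devices found" lines in this section are expected, not failures: a namespace is dropped from a run whenever the requested I/O size is not usable against its block size (36964 earlier is a multiple of neither 512 nor 4096, and the 512-byte sweeps here cannot be issued against the 4096-byte-block lvol namespace), so runs where every namespace is dropped simply do nothing. A quick bash check of the 36964 case (a sketch, mirroring the warning text captured above):

  # 36964 is not a multiple of either exported sector size, so both
  # namespaces were removed from that run
  echo $(( 36964 % 512 )) $(( 36964 % 4096 ))    # prints: 100 100

The latency table for the 128 KiB, queue-depth-32 run that was just launched follows directly below.
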
00:18:17.489 ======================================================== 00:18:17.489 Latency(us) 00:18:17.489 Device Information : IOPS MiB/s Average min max 00:18:17.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1303.79 162.97 24563.29 5363.98 63941.74 00:18:17.489 ======================================================== 00:18:17.489 Total : 1303.79 162.97 24563.29 5363.98 63941.74 00:18:17.489 00:18:17.489 04:14:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:17.490 04:14:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:17.490 04:14:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:17.490 Initializing NVMe Controllers 00:18:17.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.490 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:17.490 WARNING: Some requested NVMe devices were skipped 00:18:17.490 No valid NVMe controllers or AIO or URING devices found 00:18:17.490 04:14:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:17.490 04:14:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:27.465 Initializing NVMe Controllers 00:18:27.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:27.465 Controller IO queue size 128, less than required. 00:18:27.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:27.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:27.465 Initialization complete. Launching workers. 
00:18:27.465 ======================================================== 00:18:27.465 Latency(us) 00:18:27.465 Device Information : IOPS MiB/s Average min max 00:18:27.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4189.29 523.66 30574.68 12560.93 66120.49 00:18:27.465 ======================================================== 00:18:27.465 Total : 4189.29 523.66 30574.68 12560.93 66120.49 00:18:27.465 00:18:27.465 04:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.465 04:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 18957a28-b881-40c0-862f-5b1bd8e4076a 00:18:27.465 04:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:27.724 04:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f6d7b255-c037-4ba1-bd01-db7815b5a337 00:18:27.981 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.239 rmmod nvme_tcp 00:18:28.239 rmmod nvme_fabrics 00:18:28.239 rmmod nvme_keyring 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 90541 ']' 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 90541 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 90541 ']' 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 90541 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90541 00:18:28.239 killing process with pid 90541 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:28.239 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90541' 00:18:28.240 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@967 -- # kill 90541 00:18:28.240 04:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 90541 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:30.138 00:18:30.138 real 0m50.136s 00:18:30.138 user 3m9.086s 00:18:30.138 sys 0m11.860s 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:30.138 ************************************ 00:18:30.138 END TEST nvmf_perf 00:18:30.138 ************************************ 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.138 ************************************ 00:18:30.138 START TEST nvmf_fio_host 00:18:30.138 ************************************ 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:30.138 * Looking for test storage... 
00:18:30.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:30.138 04:14:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.138 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:30.139 Cannot find device "nvmf_tgt_br" 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:30.139 Cannot find device "nvmf_tgt_br2" 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:30.139 
Cannot find device "nvmf_tgt_br" 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:30.139 Cannot find device "nvmf_tgt_br2" 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:18:30.139 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:30.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:30.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:30.398 04:14:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:30.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:30.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:18:30.398 00:18:30.398 --- 10.0.0.2 ping statistics --- 00:18:30.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.398 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:30.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:30.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:18:30.398 00:18:30.398 --- 10.0.0.3 ping statistics --- 00:18:30.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.398 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:30.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:30.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:30.398 00:18:30.398 --- 10.0.0.1 ping statistics --- 00:18:30.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.398 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:30.398 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.657 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=91337 00:18:30.657 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:18:30.657 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.657 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 91337 00:18:30.657 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 91337 ']' 00:18:30.657 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.657 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.657 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.657 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.657 04:14:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.657 [2024-07-23 04:14:23.790015] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:18:30.657 [2024-07-23 04:14:23.790256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.657 [2024-07-23 04:14:23.908267] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:30.657 [2024-07-23 04:14:23.926851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:30.657 [2024-07-23 04:14:23.997564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.657 [2024-07-23 04:14:23.997903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.657 [2024-07-23 04:14:23.998079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.657 [2024-07-23 04:14:23.998261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.657 [2024-07-23 04:14:23.998312] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
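
[editor's note] As in the perf test earlier, nvmf_fio_host rebuilds the veth/bridge topology and relaunches nvmf_tgt inside nvmf_tgt_ns_spdk; waitforlisten simply polls until the application answers on /var/tmp/spdk.sock. Once that RPC socket is up, the target can be inspected with the same rpc.py the scripts use. A couple of read-only calls that should be safe at this point (a sketch; these are standard SPDK RPCs, not commands captured in this log):

  # Show the reactors pinned by the -m 0xF mask passed to nvmf_tgt above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_reactors
  # List bdevs; Malloc1 shows up here once the bdev_malloc_create call below has run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
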
00:18:30.657 [2024-07-23 04:14:23.998705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.657 [2024-07-23 04:14:23.998883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.657 [2024-07-23 04:14:23.999027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:30.657 [2024-07-23 04:14:23.999035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.916 [2024-07-23 04:14:24.057636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:31.482 04:14:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.482 04:14:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:18:31.482 04:14:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:31.740 [2024-07-23 04:14:24.982936] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.740 04:14:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:31.740 04:14:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.740 04:14:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.740 04:14:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:31.999 Malloc1 00:18:32.258 04:14:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:32.516 04:14:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:32.516 04:14:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.774 [2024-07-23 04:14:26.047889] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.774 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:33.032 04:14:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:33.291 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:33.291 fio-3.35 00:18:33.291 Starting 1 thread 00:18:35.839 00:18:35.839 test: (groupid=0, jobs=1): err= 0: pid=91419: Tue Jul 23 04:14:28 2024 00:18:35.839 read: IOPS=9897, BW=38.7MiB/s (40.5MB/s)(77.6MiB/2006msec) 00:18:35.839 slat (nsec): min=1681, max=350531, avg=2253.53, stdev=3398.48 00:18:35.839 clat (usec): min=2805, max=11955, avg=6719.39, stdev=484.11 00:18:35.839 lat (usec): min=2821, max=11957, avg=6721.64, stdev=483.82 00:18:35.839 clat percentiles (usec): 00:18:35.839 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6390], 00:18:35.839 | 30.00th=[ 6521], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6783], 00:18:35.839 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:18:35.839 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[10159], 99.95th=[11207], 00:18:35.839 | 99.99th=[11863] 00:18:35.839 bw ( KiB/s): min=38944, max=39872, per=100.00%, avg=39602.00, stdev=442.85, samples=4 00:18:35.839 iops : min= 9736, max= 9968, avg=9900.50, stdev=110.71, samples=4 00:18:35.839 write: IOPS=9919, BW=38.7MiB/s (40.6MB/s)(77.7MiB/2006msec); 0 zone resets 00:18:35.839 slat (nsec): min=1759, max=347885, avg=2357.23, stdev=2940.69 00:18:35.839 clat (usec): min=2610, max=11937, avg=6150.06, stdev=476.89 00:18:35.839 lat (usec): min=2625, max=11939, avg=6152.41, stdev=476.77 00:18:35.839 
clat percentiles (usec): 00:18:35.839 | 1.00th=[ 5276], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5800], 00:18:35.839 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6194], 00:18:35.839 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6587], 95.00th=[ 6783], 00:18:35.839 | 99.00th=[ 7373], 99.50th=[ 8848], 99.90th=[10028], 99.95th=[11469], 00:18:35.839 | 99.99th=[11863] 00:18:35.839 bw ( KiB/s): min=39360, max=40480, per=99.93%, avg=39650.00, stdev=553.50, samples=4 00:18:35.839 iops : min= 9840, max=10120, avg=9912.50, stdev=138.38, samples=4 00:18:35.839 lat (msec) : 4=0.16%, 10=99.72%, 20=0.12% 00:18:35.839 cpu : usr=65.89%, sys=25.64%, ctx=509, majf=0, minf=8 00:18:35.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:35.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.839 issued rwts: total=19854,19899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.839 00:18:35.839 Run status group 0 (all jobs): 00:18:35.839 READ: bw=38.7MiB/s (40.5MB/s), 38.7MiB/s-38.7MiB/s (40.5MB/s-40.5MB/s), io=77.6MiB (81.3MB), run=2006-2006msec 00:18:35.839 WRITE: bw=38.7MiB/s (40.6MB/s), 38.7MiB/s-38.7MiB/s (40.6MB/s-40.6MB/s), io=77.7MiB (81.5MB), run=2006-2006msec 00:18:35.839 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:35.839 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:35.839 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:35.839 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:35.839 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:35.840 04:14:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:35.840 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:35.840 fio-3.35 00:18:35.840 Starting 1 thread 00:18:38.370 00:18:38.370 test: (groupid=0, jobs=1): err= 0: pid=91469: Tue Jul 23 04:14:31 2024 00:18:38.370 read: IOPS=8930, BW=140MiB/s (146MB/s)(280MiB/2008msec) 00:18:38.370 slat (usec): min=2, max=128, avg= 3.52, stdev= 2.46 00:18:38.370 clat (usec): min=1675, max=16257, avg=8020.47, stdev=2539.06 00:18:38.370 lat (usec): min=1679, max=16259, avg=8024.00, stdev=2539.17 00:18:38.370 clat percentiles (usec): 00:18:38.370 | 1.00th=[ 3654], 5.00th=[ 4490], 10.00th=[ 5080], 20.00th=[ 5800], 00:18:38.370 | 30.00th=[ 6456], 40.00th=[ 7046], 50.00th=[ 7635], 60.00th=[ 8291], 00:18:38.370 | 70.00th=[ 9110], 80.00th=[10159], 90.00th=[11469], 95.00th=[12911], 00:18:38.370 | 99.00th=[15270], 99.50th=[15795], 99.90th=[16057], 99.95th=[16188], 00:18:38.370 | 99.99th=[16188] 00:18:38.370 bw ( KiB/s): min=67232, max=77376, per=49.48%, avg=70704.00, stdev=4555.14, samples=4 00:18:38.370 iops : min= 4202, max= 4836, avg=4419.00, stdev=284.70, samples=4 00:18:38.370 write: IOPS=5057, BW=79.0MiB/s (82.9MB/s)(145MiB/1832msec); 0 zone resets 00:18:38.370 slat (usec): min=28, max=399, avg=35.34, stdev=10.66 00:18:38.370 clat (usec): min=3842, max=20260, avg=11404.31, stdev=2171.44 00:18:38.370 lat (usec): min=3886, max=20307, avg=11439.66, stdev=2174.03 00:18:38.370 clat percentiles (usec): 00:18:38.370 | 1.00th=[ 7570], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9634], 00:18:38.370 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11600], 00:18:38.370 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14353], 95.00th=[15401], 00:18:38.370 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:18:38.370 | 99.99th=[20317] 00:18:38.370 bw ( KiB/s): min=70816, max=77664, per=90.93%, avg=73584.00, stdev=2900.73, samples=4 00:18:38.370 iops : min= 4426, max= 4854, avg=4599.00, stdev=181.30, samples=4 00:18:38.370 lat (msec) : 2=0.01%, 4=1.25%, 10=59.74%, 20=38.99%, 50=0.01% 00:18:38.370 cpu : usr=75.34%, sys=18.93%, ctx=2, majf=0, minf=4 00:18:38.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:38.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:38.370 issued rwts: total=17933,9266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:38.370 00:18:38.370 Run status group 0 (all jobs): 
00:18:38.370 READ: bw=140MiB/s (146MB/s), 140MiB/s-140MiB/s (146MB/s-146MB/s), io=280MiB (294MB), run=2008-2008msec 00:18:38.370 WRITE: bw=79.0MiB/s (82.9MB/s), 79.0MiB/s-79.0MiB/s (82.9MB/s-82.9MB/s), io=145MiB (152MB), run=1832-1832msec 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:38.370 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:18:38.629 Nvme0n1 00:18:38.629 04:14:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:38.887 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=304f1cf3-936d-4a0b-8274-ed9a55549658 00:18:38.887 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 304f1cf3-936d-4a0b-8274-ed9a55549658 00:18:38.887 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=304f1cf3-936d-4a0b-8274-ed9a55549658 00:18:38.887 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:38.887 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:38.887 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:38.887 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:39.146 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:39.146 { 00:18:39.146 "uuid": "304f1cf3-936d-4a0b-8274-ed9a55549658", 00:18:39.146 "name": "lvs_0", 00:18:39.146 "base_bdev": "Nvme0n1", 00:18:39.146 "total_data_clusters": 4, 00:18:39.146 "free_clusters": 4, 00:18:39.146 "block_size": 4096, 00:18:39.146 "cluster_size": 1073741824 00:18:39.146 } 00:18:39.146 ]' 00:18:39.146 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="304f1cf3-936d-4a0b-8274-ed9a55549658") .free_clusters' 00:18:39.146 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 
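One detail worth spelling out, since the same pattern repeats for the lvol-backed namespaces configured below: fio never sees a block device in this test. The harness preloads the SPDK fio plugin and encodes the NVMe/TCP connection parameters in fio's --filename string, which is why each job banner reports ioengine=spdk. Condensed from the invocations above, with paths exactly as used in this workspace:

    # Run fio through the SPDK external ioengine against the TCP listener.
    # The quoted --filename carries transport, address family, target address,
    # service id (port) and namespace id instead of a /dev path.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

The ldd/grep probing that precedes each run only decides whether an ASAN runtime must be added to LD_PRELOAD as well; in this build both probes came back empty.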
00:18:39.146 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="304f1cf3-936d-4a0b-8274-ed9a55549658") .cluster_size' 00:18:39.146 4096 00:18:39.146 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:18:39.146 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:18:39.146 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:18:39.146 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:39.404 1e9cd6e5-e2ca-4144-a723-cda697a5509b 00:18:39.404 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:39.662 04:14:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:39.921 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:40.179 04:14:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:40.179 04:14:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:40.437 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:40.437 fio-3.35 00:18:40.437 Starting 1 thread 00:18:42.967 00:18:42.967 test: (groupid=0, jobs=1): err= 0: pid=91571: Tue Jul 23 04:14:35 2024 00:18:42.967 read: IOPS=6390, BW=25.0MiB/s (26.2MB/s)(50.1MiB/2009msec) 00:18:42.967 slat (nsec): min=1735, max=233663, avg=2600.57, stdev=3497.50 00:18:42.967 clat (usec): min=2827, max=18201, avg=10459.50, stdev=851.11 00:18:42.967 lat (usec): min=2832, max=18204, avg=10462.10, stdev=850.95 00:18:42.967 clat percentiles (usec): 00:18:42.967 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:18:42.967 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:18:42.967 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:18:42.967 | 99.00th=[12387], 99.50th=[12649], 99.90th=[16319], 99.95th=[17171], 00:18:42.967 | 99.99th=[17433] 00:18:42.967 bw ( KiB/s): min=24552, max=26208, per=99.95%, avg=25548.00, stdev=712.57, samples=4 00:18:42.967 iops : min= 6138, max= 6552, avg=6387.00, stdev=178.14, samples=4 00:18:42.967 write: IOPS=6391, BW=25.0MiB/s (26.2MB/s)(50.2MiB/2009msec); 0 zone resets 00:18:42.967 slat (nsec): min=1787, max=140433, avg=2734.88, stdev=2428.65 00:18:42.967 clat (usec): min=1798, max=17304, avg=9490.31, stdev=819.97 00:18:42.967 lat (usec): min=1806, max=17306, avg=9493.05, stdev=819.89 00:18:42.967 clat percentiles (usec): 00:18:42.967 | 1.00th=[ 7767], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8848], 00:18:42.967 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:18:42.967 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10421], 95.00th=[10683], 00:18:42.967 | 99.00th=[11338], 99.50th=[11600], 99.90th=[16319], 99.95th=[17171], 00:18:42.967 | 99.99th=[17171] 00:18:42.968 bw ( KiB/s): min=25344, max=25736, per=99.96%, avg=25554.00, stdev=207.22, samples=4 00:18:42.968 iops : min= 6336, max= 6434, avg=6388.50, stdev=51.80, samples=4 00:18:42.968 lat (msec) : 2=0.01%, 4=0.06%, 10=52.11%, 20=47.82% 00:18:42.968 cpu : usr=70.77%, sys=23.21%, ctx=519, majf=0, minf=8 00:18:42.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:42.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:42.968 issued rwts: total=12838,12840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:42.968 00:18:42.968 Run status group 0 (all jobs): 00:18:42.968 READ: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s 
(26.2MB/s-26.2MB/s), io=50.1MiB (52.6MB), run=2009-2009msec 00:18:42.968 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=50.2MiB (52.6MB), run=2009-2009msec 00:18:42.968 04:14:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:42.968 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:18:43.240 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=42164867-c900-47d6-b65a-26ca4e7bcf22 00:18:43.240 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 42164867-c900-47d6-b65a-26ca4e7bcf22 00:18:43.240 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=42164867-c900-47d6-b65a-26ca4e7bcf22 00:18:43.240 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:43.240 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:43.240 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:43.240 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:43.511 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:43.511 { 00:18:43.511 "uuid": "304f1cf3-936d-4a0b-8274-ed9a55549658", 00:18:43.511 "name": "lvs_0", 00:18:43.511 "base_bdev": "Nvme0n1", 00:18:43.511 "total_data_clusters": 4, 00:18:43.511 "free_clusters": 0, 00:18:43.511 "block_size": 4096, 00:18:43.511 "cluster_size": 1073741824 00:18:43.511 }, 00:18:43.511 { 00:18:43.511 "uuid": "42164867-c900-47d6-b65a-26ca4e7bcf22", 00:18:43.511 "name": "lvs_n_0", 00:18:43.511 "base_bdev": "1e9cd6e5-e2ca-4144-a723-cda697a5509b", 00:18:43.511 "total_data_clusters": 1022, 00:18:43.511 "free_clusters": 1022, 00:18:43.511 "block_size": 4096, 00:18:43.511 "cluster_size": 4194304 00:18:43.511 } 00:18:43.511 ]' 00:18:43.511 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="42164867-c900-47d6-b65a-26ca4e7bcf22") .free_clusters' 00:18:43.511 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:18:43.511 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="42164867-c900-47d6-b65a-26ca4e7bcf22") .cluster_size' 00:18:43.511 4088 00:18:43.511 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:43.511 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:18:43.511 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:18:43.511 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:18:43.770 dd862762-7e9d-4b2c-9e3a-f32fce81735e 00:18:43.770 04:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:18:43.770 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 
lvs_n_0/lbd_nest_0 00:18:44.029 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:44.287 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:44.287 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:44.287 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:44.287 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:44.287 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:44.287 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:44.287 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:44.288 04:14:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:44.546 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:44.546 fio-3.35 00:18:44.546 Starting 1 thread 00:18:47.077 00:18:47.077 test: (groupid=0, jobs=1): err= 0: pid=91645: Tue 
Jul 23 04:14:39 2024 00:18:47.077 read: IOPS=5860, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2009msec) 00:18:47.077 slat (nsec): min=1906, max=286779, avg=2685.41, stdev=3902.99 00:18:47.077 clat (usec): min=3163, max=20102, avg=11427.71, stdev=955.47 00:18:47.077 lat (usec): min=3171, max=20105, avg=11430.40, stdev=955.18 00:18:47.077 clat percentiles (usec): 00:18:47.077 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:18:47.077 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:18:47.077 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:18:47.077 | 99.00th=[13698], 99.50th=[14091], 99.90th=[17433], 99.95th=[19006], 00:18:47.077 | 99.99th=[20055] 00:18:47.077 bw ( KiB/s): min=22448, max=23928, per=99.93%, avg=23424.00, stdev=680.36, samples=4 00:18:47.077 iops : min= 5612, max= 5982, avg=5856.00, stdev=170.09, samples=4 00:18:47.077 write: IOPS=5851, BW=22.9MiB/s (24.0MB/s)(45.9MiB/2009msec); 0 zone resets 00:18:47.077 slat (usec): min=2, max=207, avg= 2.81, stdev= 2.81 00:18:47.077 clat (usec): min=2126, max=19197, avg=10344.36, stdev=907.22 00:18:47.077 lat (usec): min=2138, max=19199, avg=10347.17, stdev=907.08 00:18:47.077 clat percentiles (usec): 00:18:47.077 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:18:47.077 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:18:47.077 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:18:47.077 | 99.00th=[12387], 99.50th=[12780], 99.90th=[17171], 99.95th=[17695], 00:18:47.077 | 99.99th=[19006] 00:18:47.077 bw ( KiB/s): min=23296, max=23488, per=99.89%, avg=23378.00, stdev=93.84, samples=4 00:18:47.077 iops : min= 5824, max= 5872, avg=5844.50, stdev=23.46, samples=4 00:18:47.077 lat (msec) : 4=0.06%, 10=19.64%, 20=80.29%, 50=0.02% 00:18:47.077 cpu : usr=73.75%, sys=20.92%, ctx=5, majf=0, minf=8 00:18:47.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:47.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.077 issued rwts: total=11773,11755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.077 00:18:47.077 Run status group 0 (all jobs): 00:18:47.077 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.2MB), run=2009-2009msec 00:18:47.077 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.1MB), run=2009-2009msec 00:18:47.077 04:14:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:47.077 04:14:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:18:47.077 04:14:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:18:47.077 04:14:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:47.336 04:14:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:18:47.594 04:14:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:47.853 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.788 rmmod nvme_tcp 00:18:48.788 rmmod nvme_fabrics 00:18:48.788 rmmod nvme_keyring 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 91337 ']' 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 91337 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 91337 ']' 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 91337 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91337 00:18:48.788 killing process with pid 91337 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91337' 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 91337 00:18:48.788 04:14:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 91337 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.048 04:14:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:49.048 ************************************ 00:18:49.048 END TEST nvmf_fio_host 00:18:49.048 ************************************ 00:18:49.048 00:18:49.048 real 0m18.885s 00:18:49.048 user 1m22.229s 00:18:49.048 sys 0m4.796s 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.048 ************************************ 00:18:49.048 START TEST nvmf_failover 00:18:49.048 ************************************ 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:49.048 * Looking for test storage... 00:18:49.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:49.048 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 
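Two of the variables set while sourcing nvmf/common.sh above matter for the rest of the suite: nvme gen-hostnqn mints a UUID-based host NQN and the same UUID doubles as the host ID, giving every initiator-side connection in this run a stable identity. This particular test drives bdevperf rather than the kernel initiator, but as a hedged illustration of how NVME_CONNECT and NVME_HOST are meant to be consumed, a connect would look roughly like:

    # Kernel NVMe/TCP initiator using the generated host identity (illustrative only).
    nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
        -n "$NVME_SUBNQN" --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"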
00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:49.049 Cannot find device "nvmf_tgt_br" 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:49.049 Cannot find device "nvmf_tgt_br2" 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:49.049 Cannot find device "nvmf_tgt_br" 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:49.049 Cannot find device "nvmf_tgt_br2" 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:18:49.049 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:49.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:49.308 00:18:49.308 --- 10.0.0.2 ping statistics --- 00:18:49.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.308 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:49.308 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:49.308 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:18:49.308 00:18:49.308 --- 10.0.0.3 ping statistics --- 00:18:49.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.308 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:49.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:49.308 00:18:49.308 --- 10.0.0.1 ping statistics --- 00:18:49.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.308 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:49.308 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=91876 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 91876 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 91876 ']' 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.567 04:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:49.567 [2024-07-23 04:14:42.728852] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:18:49.567 [2024-07-23 04:14:42.728958] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.567 [2024-07-23 04:14:42.854329] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
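Because this is a virt environment, nvmf_veth_init above fabricates the entire test network: the target's interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the initiator keeps 10.0.0.1 on the host side, and the nvmf_br bridge ties the veth peers together, with an iptables rule opening TCP/4420. Condensed from the commands above, with only the first target interface shown (the second pair for 10.0.0.3 is identical in shape):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings just above simply confirm that 10.0.0.2 and 10.0.0.3 are reachable from the host and that 10.0.0.1 is reachable from inside the namespace before the failover target is started.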
00:18:49.567 [2024-07-23 04:14:42.868136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:49.826 [2024-07-23 04:14:42.930222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.826 [2024-07-23 04:14:42.930561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.826 [2024-07-23 04:14:42.930772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.826 [2024-07-23 04:14:42.930884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.826 [2024-07-23 04:14:42.931097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.826 [2024-07-23 04:14:42.931494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.826 [2024-07-23 04:14:42.931641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.826 [2024-07-23 04:14:42.931644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.826 [2024-07-23 04:14:42.986817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:50.394 04:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.394 04:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:50.394 04:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.394 04:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:50.394 04:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:50.394 04:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.394 04:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:50.653 [2024-07-23 04:14:43.924874] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.653 04:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:50.911 Malloc0 00:18:50.911 04:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.170 04:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.429 04:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.687 [2024-07-23 04:14:44.855794] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.687 04:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:51.969 [2024-07-23 04:14:45.055898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:51.969 [2024-07-23 04:14:45.260069] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=91934 00:18:51.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 91934 /var/tmp/bdevperf.sock 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 91934 ']' 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.969 04:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:52.902 04:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:52.902 04:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:52.902 04:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:53.468 NVMe0n1 00:18:53.468 04:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:53.727 00:18:53.727 04:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=91957 00:18:53.727 04:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:53.727 04:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:54.660 04:14:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.918 [2024-07-23 04:14:48.080112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe054f0 is same with the state(5) to be set 00:18:54.918 [2024-07-23 04:14:48.080155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe054f0 is same with the state(5) to be set 00:18:54.918 04:14:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 
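Taken together, the failover.sh@22-@28 RPCs logged above configure the target as a single subsystem backed by a malloc bdev and exposed on three TCP listeners. A consolidated sketch, using only commands and arguments that appear in this log:
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with the harness's options (-o -u 8192, as logged)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # Malloc0: 64 MB bdev with 512-byte blocks, used as the subsystem's namespace
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Three listeners on the same address; the test fails over between these ports
    for port in 4420 4421 4422; do
            $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done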
00:18:58.210 04:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:58.210 00:18:58.210 04:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:58.469 04:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:01.798 04:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.798 [2024-07-23 04:14:54.914112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.798 04:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:02.731 04:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:02.989 04:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 91957 00:19:09.565 0 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 91934 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 91934 ']' 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 91934 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91934 00:19:09.565 killing process with pid 91934 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91934' 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 91934 00:19:09.565 04:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 91934 00:19:09.565 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:09.566 [2024-07-23 04:14:45.321379] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:19:09.566 [2024-07-23 04:14:45.321475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91934 ] 00:19:09.566 [2024-07-23 04:14:45.438132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
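On the initiator side the same run registers two paths for NVMe0, starts bdevperf I/O over RPC, and then removes and re-adds listeners so the bdev_nvme layer is forced to fail over. Condensed into one sketch from the commands logged above (bdevperf flags, sockets, NQN, ports and sleeps are exactly those shown; the ordering mirrors failover.sh@30-@57):
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # bdevperf in RPC-driven mode (-z) on its own socket: 128 queue depth, 4096-byte verify workload, 15 s
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    # NVMe0: primary path on port 4420, second trid on 4421 registered for failover
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Kick off the I/O, then pull the active listener out from under it
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # Add a third path on 4422, retire 4421, restore 4420, finally retire 4422
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422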
00:19:09.566 [2024-07-23 04:14:45.458787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.566 [2024-07-23 04:14:45.535828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.566 [2024-07-23 04:14:45.591959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:09.566 Running I/O for 15 seconds... 00:19:09.566 [2024-07-23 04:14:48.080371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.080526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.080551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.080593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.080620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.080646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.080672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 
[2024-07-23 04:14:48.080686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.080699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.080746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.080984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.080998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.081010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.081036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.081078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.081131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.081161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.081190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.081218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.566 [2024-07-23 04:14:48.081247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.081275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.081303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:25 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.081331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.081359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.081392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.081421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.081449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.081478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.566 [2024-07-23 04:14:48.081541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.566 [2024-07-23 04:14:48.081556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.081569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.081595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.081622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92016 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.081648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.081689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.081715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.081741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.081766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.081792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.081818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.081843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.081874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.081923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 
[2024-07-23 04:14:48.081957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.081972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.081996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.567 [2024-07-23 04:14:48.082584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.082610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.082637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.082669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.082696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.567 [2024-07-23 04:14:48.082710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.567 [2024-07-23 04:14:48.082722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.082736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.082748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.082762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.082775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.082788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.082801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.082815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.082831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.082846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.082858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.082872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.082889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.082919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.082932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.082956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.082971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.082985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 
04:14:48.083168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.568 [2024-07-23 04:14:48.083381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:60 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.568 [2024-07-23 04:14:48.083801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235f0d0 is same with the state(5) to be set 00:19:09.568 [2024-07-23 04:14:48.083829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.568 [2024-07-23 04:14:48.083839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.568 [2024-07-23 04:14:48.083854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91848 len:8 PRP1 0x0 PRP2 0x0 00:19:09.568 [2024-07-23 04:14:48.083866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.568 [2024-07-23 04:14:48.083880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.568 [2024-07-23 04:14:48.083889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.568 [2024-07-23 04:14:48.083898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92352 len:8 PRP1 0x0 PRP2 0x0 00:19:09.568 [2024-07-23 04:14:48.083937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.083952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.083961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.083971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92360 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.083983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.083996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92368 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92376 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 
04:14:48.084071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92384 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92392 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92400 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92408 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92416 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92424 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92432 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92440 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92448 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92456 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.569 [2024-07-23 04:14:48.084562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.569 [2024-07-23 04:14:48.084576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92464 len:8 PRP1 0x0 PRP2 0x0 00:19:09.569 [2024-07-23 04:14:48.084588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084640] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x235f0d0 was disconnected and freed. reset controller. 
00:19:09.569 [2024-07-23 04:14:48.084656] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:19:09.569 [2024-07-23 04:14:48.084706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.569 [2024-07-23 04:14:48.084725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.569 [2024-07-23 04:14:48.084752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.569 [2024-07-23 04:14:48.084776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.569 [2024-07-23 04:14:48.084800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:48.084812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:09.569 [2024-07-23 04:14:48.084864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2306360 (9): Bad file descriptor 00:19:09.569 [2024-07-23 04:14:48.088532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:09.569 [2024-07-23 04:14:48.127689] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
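The long abort dump above boils down to the two notices that decide the test: bdev_nvme_failover_trid switching the active path from 10.0.0.2:4420 to 10.0.0.2:4421, and _bdev_nvme_reset_ctrlr_complete reporting a successful reset. When reading a run like this offline, a quick filter over the bdevperf log pulls those out (try.txt path as logged above):
    grep -E 'bdev_nvme_failover_trid|_bdev_nvme_reset_ctrlr_complete' \
            /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt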
00:19:09.569 [2024-07-23 04:14:51.657920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.569 [2024-07-23 04:14:51.657980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:51.658024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.569 [2024-07-23 04:14:51.658040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:51.658065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.569 [2024-07-23 04:14:51.658078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:51.658093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.569 [2024-07-23 04:14:51.658105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:51.658140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.569 [2024-07-23 04:14:51.658154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:51.658168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.569 [2024-07-23 04:14:51.658181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:51.658195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.569 [2024-07-23 04:14:51.658207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:51.658221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.569 [2024-07-23 04:14:51.658234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.569 [2024-07-23 04:14:51.658248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.569 [2024-07-23 04:14:51.658260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.570 [2024-07-23 04:14:51.658287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.570 [2024-07-23 04:14:51.658313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.570 [2024-07-23 04:14:51.658339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:97 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.570 [2024-07-23 04:14:51.658829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:09.570 [2024-07-23 04:14:51.658855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.570 [2024-07-23 04:14:51.658882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.570 [2024-07-23 04:14:51.658919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.570 [2024-07-23 04:14:51.658981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.658996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.570 [2024-07-23 04:14:51.659021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.659037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.570 [2024-07-23 04:14:51.659050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.570 [2024-07-23 04:14:51.659064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.570 [2024-07-23 04:14:51.659076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.571 [2024-07-23 04:14:51.659103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659183] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.571 [2024-07-23 04:14:51.659644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.571 [2024-07-23 04:14:51.659671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.571 [2024-07-23 04:14:51.659698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.571 [2024-07-23 04:14:51.659724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.571 [2024-07-23 04:14:51.659751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.571 [2024-07-23 04:14:51.659777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.571 [2024-07-23 04:14:51.659804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.571 [2024-07-23 04:14:51.659830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.659974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.659989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.660002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.660017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.660030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.660044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.660057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 
04:14:51.660071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.660084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.660099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.660111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.660126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.660138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.660153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.660166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.660180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.660193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.571 [2024-07-23 04:14:51.660207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.571 [2024-07-23 04:14:51.660220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.660247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.660274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.660316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.660351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660366] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.660378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.660405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5600 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.660844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.660872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.660899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 
[2024-07-23 04:14:51.660937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.660964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.660978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.660990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.661017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.661050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.661077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.661104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.661130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.661157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.661184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.572 [2024-07-23 04:14:51.661210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.661242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.661268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.661295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.572 [2024-07-23 04:14:51.661328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.572 [2024-07-23 04:14:51.661342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:51.661626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2381d80 is same with the state(5) to be set 00:19:09.573 [2024-07-23 04:14:51.661653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.573 [2024-07-23 04:14:51.661667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.573 [2024-07-23 04:14:51.661678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5784 len:8 PRP1 0x0 PRP2 0x0 00:19:09.573 [2024-07-23 04:14:51.661689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661742] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2381d80 was disconnected and freed. reset controller. 
00:19:09.573 [2024-07-23 04:14:51.661758] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:19:09.573 [2024-07-23 04:14:51.661812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.573 [2024-07-23 04:14:51.661841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.573 [2024-07-23 04:14:51.661868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.573 [2024-07-23 04:14:51.661903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.573 [2024-07-23 04:14:51.661931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:51.661943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:09.573 [2024-07-23 04:14:51.661987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2306360 (9): Bad file descriptor 00:19:09.573 [2024-07-23 04:14:51.665409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:09.573 [2024-07-23 04:14:51.700605] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
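The second cycle above follows the same pattern, this time failing over from 10.0.0.2:4421 to 10.0.0.2:4422 after the queued READ and WRITE commands on qid 1 are aborted and qpair 0x2381d80 is freed. A companion to the earlier sketch, again purely illustrative and using the same hypothetical nvmf_failover.log file, classifies the printed I/O commands per cycle by opcode and reports the lba span they covered:

# Hypothetical companion sketch: per-cycle breakdown of the I/O commands that
# were printed (and subsequently aborted) in this console log.
import re
from collections import Counter

IO_CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")
RESET_OK = re.compile(r"Resetting controller successful")

def per_cycle_stats(path: str) -> None:
    ops = Counter()
    lbas = []
    cycle = 1
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            for op, lba, _length in IO_CMD.findall(line):
                ops[op] += 1
                lbas.append(int(lba))
            if RESET_OK.search(line):
                span = f"{min(lbas)}..{max(lbas)}" if lbas else "n/a"
                print(f"cycle {cycle}: {ops['WRITE']} WRITE / {ops['READ']} READ printed, lba {span}")
                cycle += 1
                ops.clear()
                lbas.clear()

if __name__ == "__main__":
    per_cycle_stats("nvmf_failover.log")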
00:19:09.573 [2024-07-23 04:14:56.178853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.573 [2024-07-23 04:14:56.178920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.178967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.573 [2024-07-23 04:14:56.178981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.178996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.573 [2024-07-23 04:14:56.179039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.573 [2024-07-23 04:14:56.179068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.573 [2024-07-23 04:14:56.179093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.573 [2024-07-23 04:14:56.179119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.573 [2024-07-23 04:14:56.179146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.573 [2024-07-23 04:14:56.179190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.573 [2024-07-23 04:14:56.179217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.573 [2024-07-23 04:14:56.179243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179257] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.573 [2024-07-23 04:14:56.179525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.573 [2024-07-23 04:14:56.179555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.179582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.179608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.179634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.179660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.179687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.179713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.179739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.179765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.179792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179806] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.179826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.179852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.179886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.179923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.179953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.179979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.179993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.180110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.180137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.180163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.180189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.180216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.180250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.180277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.574 [2024-07-23 04:14:56.180303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129888 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.574 [2024-07-23 04:14:56.180541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.574 [2024-07-23 04:14:56.180554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.180574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.180602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.180627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 
04:14:56.180654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.180680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.180707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.180734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.180760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.180788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.180815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.180844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.180870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.180906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.180942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.180969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.180983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.180996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.181208] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.181235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.181262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.181294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.181321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.181348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.181374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.575 [2024-07-23 04:14:56.181400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.575 [2024-07-23 04:14:56.181533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.575 [2024-07-23 04:14:56.181547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.181967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.181979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.182013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.182040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.576 [2024-07-23 04:14:56.182067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 
04:14:56.182081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2381d80 is same with the state(5) to be set 00:19:09.576 [2024-07-23 04:14:56.182097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129672 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130064 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130072 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130080 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130088 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130096 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130104 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130112 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130120 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130128 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130136 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130144 len:8 PRP1 0x0 PRP2 0x0 00:19:09.576 [2024-07-23 04:14:56.182637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.576 [2024-07-23 04:14:56.182649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.576 [2024-07-23 04:14:56.182658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.576 [2024-07-23 04:14:56.182667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130152 len:8 PRP1 0x0 PRP2 0x0 00:19:09.577 [2024-07-23 04:14:56.182679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.577 [2024-07-23 04:14:56.182690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.577 [2024-07-23 04:14:56.182704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.577 [2024-07-23 04:14:56.182714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130160 len:8 PRP1 0x0 PRP2 0x0 00:19:09.577 [2024-07-23 04:14:56.182726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.577 [2024-07-23 04:14:56.182739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.577 [2024-07-23 04:14:56.182748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.577 [2024-07-23 04:14:56.182757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130168 len:8 PRP1 0x0 PRP2 0x0 00:19:09.577 [2024-07-23 04:14:56.182769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.577 [2024-07-23 04:14:56.182780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.577 [2024-07-23 04:14:56.182790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.577 [2024-07-23 04:14:56.182799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130176 len:8 PRP1 0x0 PRP2 0x0 00:19:09.577 [2024-07-23 04:14:56.182811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.577 [2024-07-23 04:14:56.182823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.577 [2024-07-23 04:14:56.182836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.577 [2024-07-23 04:14:56.182846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130184 len:8 PRP1 0x0 PRP2 0x0 00:19:09.577 [2024-07-23 04:14:56.182858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.577 [2024-07-23 04:14:56.182911] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2381d80 was disconnected and freed. reset controller. 
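The long run of *NOTICE* lines above is the expected teardown path rather than a failure: once bdev_nvme decides to fail over, every READ and WRITE still queued on sqid:1 is completed manually with the generic status ABORTED - SQ DELETION (00/08), after which the TCP qpair (0x2381d80 in this run) is disconnected and freed and the controller is reset. A quick way to read a capture like this is to tally those aborts against the reset notices; the sketch below is not part of failover.sh itself (the script only greps for the reset message) and simply assumes the try.txt capture path used later in this test.

# Tally qpair-teardown aborts against successful controller resets in the captured bdevperf log.
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
aborts=$(grep -c 'ABORTED - SQ DELETION' "$log")
resets=$(grep -c 'Resetting controller successful' "$log")
echo "aborted completions: $aborts, successful resets: $resets"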
00:19:09.577 [2024-07-23 04:14:56.182939] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:19:09.577 [2024-07-23 04:14:56.182993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.577 [2024-07-23 04:14:56.183023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.577 [2024-07-23 04:14:56.183038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.577 [2024-07-23 04:14:56.183050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.577 [2024-07-23 04:14:56.183063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.577 [2024-07-23 04:14:56.183075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.577 [2024-07-23 04:14:56.183088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.577 [2024-07-23 04:14:56.183100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.577 [2024-07-23 04:14:56.183112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:09.577 [2024-07-23 04:14:56.186546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:09.577 [2024-07-23 04:14:56.186581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2306360 (9): Bad file descriptor 00:19:09.577 [2024-07-23 04:14:56.216975] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:09.577 00:19:09.577 Latency(us) 00:19:09.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.577 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:09.577 Verification LBA range: start 0x0 length 0x4000 00:19:09.577 NVMe0n1 : 15.01 10294.83 40.21 234.86 0.00 12128.32 528.76 14060.45 00:19:09.577 =================================================================================================================== 00:19:09.577 Total : 10294.83 40.21 234.86 0.00 12128.32 528.76 14060.45 00:19:09.577 Received shutdown signal, test time was about 15.000000 seconds 00:19:09.577 00:19:09.577 Latency(us) 00:19:09.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.577 =================================================================================================================== 00:19:09.577 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:09.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
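At this point the first bdevperf run is complete: the controller failed over from 10.0.0.2:4422 back to 10.0.0.2:4420, the reset succeeded, and the 15-second verify job closed out at roughly 10.3k IOPS. failover.sh then greps its log for 'Resetting controller successful' and checks that it found exactly three hits, one per forced path switch, before starting a second bdevperf instance (-z, listening on /var/tmp/bdevperf.sock) for the next phase. The commands that follow re-register the secondary listeners and attach all three paths under the same bdev name; a condensed sketch of that RPC sequence, using the socket, addresses and NQN shown in this log:

# Re-add the secondary listeners, then attach every path as controller NVMe0.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

Detaching the 4420 path afterwards (bdev_nvme_detach_controller NVMe0 ... -s 4420) is what forces the failover that shows up in the second run below.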
00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=92130 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 92130 /var/tmp/bdevperf.sock 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 92130 ']' 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:09.577 [2024-07-23 04:15:02.797420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:09.577 04:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:09.836 [2024-07-23 04:15:03.053515] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:19:09.836 04:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:10.093 NVMe0n1 00:19:10.093 04:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:10.350 00:19:10.350 04:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:10.608 00:19:10.608 04:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:10.608 04:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:10.867 04:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:11.125 04:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:14.410 04:15:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:14.410 04:15:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:14.410 04:15:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=92200 00:19:14.410 04:15:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:14.410 04:15:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 92200 00:19:15.786 0 00:19:15.786 04:15:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:15.786 [2024-07-23 04:15:02.255947] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:19:15.786 [2024-07-23 04:15:02.256122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92130 ] 00:19:15.786 [2024-07-23 04:15:02.373195] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:15.786 [2024-07-23 04:15:02.386466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.786 [2024-07-23 04:15:02.448764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.786 [2024-07-23 04:15:02.501431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:15.786 [2024-07-23 04:15:04.293243] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:19:15.786 [2024-07-23 04:15:04.293360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.786 [2024-07-23 04:15:04.293382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.786 [2024-07-23 04:15:04.293399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.786 [2024-07-23 04:15:04.293411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.786 [2024-07-23 04:15:04.293423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.786 [2024-07-23 04:15:04.293435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.787 [2024-07-23 04:15:04.293447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.787 [2024-07-23 04:15:04.293458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.787 [2024-07-23 04:15:04.293470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:15.787 [2024-07-23 04:15:04.293514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:15.787 [2024-07-23 04:15:04.293541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5b360 (9): Bad file descriptor 00:19:15.787 [2024-07-23 04:15:04.303978] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:15.787 Running I/O for 1 seconds... 00:19:15.787 00:19:15.787 Latency(us) 00:19:15.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.787 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:15.787 Verification LBA range: start 0x0 length 0x4000 00:19:15.787 NVMe0n1 : 1.01 7949.59 31.05 0.00 0.00 16041.82 1884.16 14358.34 00:19:15.787 =================================================================================================================== 00:19:15.787 Total : 7949.59 31.05 0.00 0.00 16041.82 1884.16 14358.34 00:19:15.787 04:15:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:15.787 04:15:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:15.787 04:15:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:16.045 04:15:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:16.045 04:15:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:16.304 04:15:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:16.561 04:15:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 92130 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 92130 ']' 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 92130 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92130 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:19.864 killing process with pid 92130 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 92130' 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 92130 00:19:19.864 04:15:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 92130 00:19:19.864 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:20.131 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.396 rmmod nvme_tcp 00:19:20.396 rmmod nvme_fabrics 00:19:20.396 rmmod nvme_keyring 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 91876 ']' 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 91876 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 91876 ']' 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 91876 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91876 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91876' 00:19:20.396 killing process with pid 91876 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 91876 00:19:20.396 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 91876 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:20.654 00:19:20.654 real 0m31.705s 00:19:20.654 user 2m2.278s 00:19:20.654 sys 0m5.428s 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:20.654 ************************************ 00:19:20.654 END TEST nvmf_failover 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:20.654 ************************************ 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.654 ************************************ 00:19:20.654 START TEST nvmf_host_discovery 00:19:20.654 ************************************ 00:19:20.654 04:15:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:20.913 * Looking for test storage... 
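That closes out nvmf_failover: the run took about 31.7 seconds of wall-clock time, the remaining 4422 and 4421 paths were detached, bdevperf and the nvmf target app (pids 92130 and 91876 in this run) were killed, the subsystem was deleted, and the kernel nvme modules were unloaded before the harness moved on to nvmf_host_discovery. A condensed sketch of that teardown in the order it appears in this log (the pids are specific to this run, and the script's killprocess helper also checks the process name and waits for the pid before moving on):

# Teardown once the failover checks have passed.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
    -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
kill 92130 && wait 92130      # stop bdevperf; wait works because it is a child of the test shell
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
modprobe -v -r nvme-tcp       # the -v output in the log shows nvme_fabrics and nvme_keyring going too
modprobe -v -r nvme-fabrics
kill 91876                    # stop the nvmf target app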
00:19:20.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:19:20.913 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:20.914 04:15:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 
-- # ip link set nvmf_tgt_br nomaster 00:19:20.914 Cannot find device "nvmf_tgt_br" 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.914 Cannot find device "nvmf_tgt_br2" 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:20.914 Cannot find device "nvmf_tgt_br" 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:20.914 Cannot find device "nvmf_tgt_br2" 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:20.914 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:20.914 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:20.914 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 
-- # ip link set nvmf_tgt_br up 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:21.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:19:21.173 00:19:21.173 --- 10.0.0.2 ping statistics --- 00:19:21.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.173 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:21.173 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:21.173 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:21.173 00:19:21.173 --- 10.0.0.3 ping statistics --- 00:19:21.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.173 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:21.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:21.173 00:19:21.173 --- 10.0.0.1 ping statistics --- 00:19:21.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.173 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=92463 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 92463 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 92463 ']' 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.173 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.173 [2024-07-23 04:15:14.454839] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:19:21.173 [2024-07-23 04:15:14.454888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.431 [2024-07-23 04:15:14.571117] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:19:21.431 [2024-07-23 04:15:14.590343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.431 [2024-07-23 04:15:14.657637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.431 [2024-07-23 04:15:14.657703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.431 [2024-07-23 04:15:14.657717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.431 [2024-07-23 04:15:14.657727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.431 [2024-07-23 04:15:14.657736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.431 [2024-07-23 04:15:14.657775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.431 [2024-07-23 04:15:14.715523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:21.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:21.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:21.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.689 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.689 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.689 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.689 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.689 [2024-07-23 04:15:14.814549] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.689 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.689 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:21.689 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.689 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.689 [2024-07-23 04:15:14.822696] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.690 null0 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.690 04:15:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.690 null1 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.690 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=92488 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 92488 /tmp/host.sock 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 92488 ']' 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.690 04:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.690 [2024-07-23 04:15:14.909259] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:19:21.690 [2024-07-23 04:15:14.909348] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92488 ] 00:19:21.690 [2024-07-23 04:15:15.032572] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
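Condensed for reference, the target/host setup that the trace above performs boils down to the short shell sketch below. It is only an illustration assembled from the commands and RPC calls visible in this log (run from the SPDK repo root; the nvmf_tgt_ns_spdk namespace, 10.0.0.x addresses, port 8009 and the null-bdev sizes are simply the values this run used), not a canonical recipe:

  # Target side: nvmf_tgt inside the nvmf_tgt_ns_spdk namespace set up earlier,
  # with a TCP transport, a discovery listener on 8009 and two null bdevs to export.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  ./scripts/rpc.py bdev_null_create null0 1000 512
  ./scripts/rpc.py bdev_null_create null1 1000 512
  # Host side: a second SPDK app on its own RPC socket, which the discovery test drives below.
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

(The harness itself waits for each RPC socket to come up before issuing calls, which is what the "Waiting for process to start up and listen on UNIX domain socket ..." messages above correspond to.)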
00:19:21.948 [2024-07-23 04:15:15.045996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.948 [2024-07-23 04:15:15.111226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.948 [2024-07-23 04:15:15.160875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:22.515 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.515 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:22.515 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:22.515 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:22.515 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.515 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:22.515 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.515 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:22.515 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.515 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:22.774 04:15:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 04:15:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # jq -r '.[].name' 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:22.774 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:23.033 [2024-07-23 04:15:16.218865] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:23.033 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:23.034 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.292 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:23.292 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:19:23.293 04:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:19:23.551 [2024-07-23 04:15:16.867292] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:23.551 [2024-07-23 04:15:16.867318] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:23.551 [2024-07-23 04:15:16.867373] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:23.551 [2024-07-23 04:15:16.873329] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:23.810 [2024-07-23 04:15:16.930188] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:23.810 [2024-07-23 04:15:16.930225] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 
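The checks above all follow the same poll-until-true pattern: query the host app over /tmp/host.sock and retry, up to 10 times with a 1 s sleep between attempts as in the waitforcondition helper, until the expected controller and bdev names appear. A minimal sketch of that loop, using only the RPCs seen in this trace:

  # Poll the host app until discovery has attached controller nvme0 and exposed bdev nvme0n1.
  for i in $(seq 1 10); do
      ctrls=$(./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)
      bdevs=$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
      [[ "$ctrls" == "nvme0" && "$bdevs" == "nvme0n1" ]] && break
      sleep 1
  done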
00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.378 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # local max=10 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.638 [2024-07-23 04:15:17.808401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:24.638 [2024-07-23 04:15:17.808748] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:24.638 [2024-07-23 04:15:17.808770] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:24.638 [2024-07-23 04:15:17.814759] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:24.638 04:15:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:24.638 [2024-07-23 04:15:17.873007] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:24.638 [2024-07-23 04:15:17.873025] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:24.638 [2024-07-23 04:15:17.873031] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT 
$NVMF_SECOND_PORT" ]]' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:24.638 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.899 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:24.899 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.899 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:24.899 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:24.899 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:24.899 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:24.899 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.899 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.899 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:24.900 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:19:24.900 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:24.900 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.900 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:24.900 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.900 04:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.900 [2024-07-23 04:15:18.041545] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:24.900 [2024-07-23 04:15:18.041588] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:24.900 [2024-07-23 04:15:18.047559] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:24.900 [2024-07-23 04:15:18.047603] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:24.900 [2024-07-23 04:15:18.047683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.900 [2024-07-23 04:15:18.047707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.900 [2024-07-23 04:15:18.047719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.900 [2024-07-23 04:15:18.047727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.900 [2024-07-23 04:15:18.047736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.900 [2024-07-23 04:15:18.047743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
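The repetitive trace above and below is one small polling helper from autotest_common.sh expanded over and over: the caller passes a bash condition string and the helper re-evaluates it up to ten times before giving up. Below is a minimal sketch of that pattern, reconstructed only from the @912-@916 expansions visible in this trace (the real helper may pace retries differently), together with the path check the discovery test performs a few lines further down (host/discovery.sh@131) once the 4420 listener has been removed. Treat it as an illustrative sketch, not the exact implementation.

  # Polling helper as suggested by the @912-@916 expansions in the trace.
  waitforcondition() {
      local cond=$1
      local max=10
      while ((max--)); do
          eval "$cond" && return 0
          sleep 1   # assumption: the trace does not show how retries are paced
      done
      return 1
  }

  # Path check used below: after nvmf_subsystem_remove_listener drops 4420,
  # only the 4421 path should remain on controller nvme0. The trace's rpc_cmd
  # is assumed to be a thin wrapper around scripts/rpc.py.
  get_subsystem_paths() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
          bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'   # NVMF_SECOND_PORT=4421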
00:19:24.900 [2024-07-23 04:15:18.047752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.900 [2024-07-23 04:15:18.047760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.900 [2024-07-23 04:15:18.047767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abb870 is same with the state(5) to be set 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:24.900 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:25.167 04:15:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:25.167 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.168 04:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.544 [2024-07-23 04:15:19.467932] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:26.544 [2024-07-23 04:15:19.467963] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:26.544 [2024-07-23 04:15:19.467995] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:26.544 [2024-07-23 04:15:19.473958] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:19:26.544 [2024-07-23 04:15:19.533827] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:26.544 [2024-07-23 04:15:19.533880] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:19:26.544 request: 00:19:26.544 { 00:19:26.544 "name": "nvme", 00:19:26.544 "trtype": "tcp", 00:19:26.544 "traddr": "10.0.0.2", 00:19:26.544 "adrfam": "ipv4", 00:19:26.544 "trsvcid": "8009", 00:19:26.544 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:26.544 "wait_for_attach": true, 00:19:26.544 "method": "bdev_nvme_start_discovery", 00:19:26.544 "req_id": 1 00:19:26.544 } 00:19:26.544 Got JSON-RPC error response 00:19:26.544 response: 00:19:26.544 { 00:19:26.544 "code": -17, 00:19:26.544 "message": "File exists" 00:19:26.544 } 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:26.544 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.545 request: 00:19:26.545 { 00:19:26.545 "name": "nvme_second", 00:19:26.545 "trtype": "tcp", 00:19:26.545 "traddr": "10.0.0.2", 00:19:26.545 "adrfam": "ipv4", 00:19:26.545 "trsvcid": "8009", 00:19:26.545 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:26.545 "wait_for_attach": true, 00:19:26.545 "method": "bdev_nvme_start_discovery", 00:19:26.545 "req_id": 1 00:19:26.545 } 00:19:26.545 Got JSON-RPC error response 00:19:26.545 response: 00:19:26.545 { 00:19:26.545 "code": -17, 00:19:26.545 "message": "File exists" 00:19:26.545 } 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:26.545 04:15:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.545 04:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:27.480 [2024-07-23 04:15:20.807212] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:27.480 [2024-07-23 04:15:20.807268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad90a0 with addr=10.0.0.2, port=8010 00:19:27.480 [2024-07-23 04:15:20.807285] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:27.480 [2024-07-23 04:15:20.807293] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:27.480 [2024-07-23 04:15:20.807300] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:28.857 [2024-07-23 04:15:21.807199] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.857 [2024-07-23 04:15:21.807250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad90a0 with addr=10.0.0.2, port=8010 00:19:28.857 [2024-07-23 04:15:21.807265] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:28.857 [2024-07-23 04:15:21.807273] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:28.857 [2024-07-23 04:15:21.807280] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:29.793 [2024-07-23 04:15:22.807135] 
bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:19:29.793 request: 00:19:29.793 { 00:19:29.793 "name": "nvme_second", 00:19:29.793 "trtype": "tcp", 00:19:29.793 "traddr": "10.0.0.2", 00:19:29.793 "adrfam": "ipv4", 00:19:29.793 "trsvcid": "8010", 00:19:29.793 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:29.793 "wait_for_attach": false, 00:19:29.793 "attach_timeout_ms": 3000, 00:19:29.793 "method": "bdev_nvme_start_discovery", 00:19:29.793 "req_id": 1 00:19:29.793 } 00:19:29.793 Got JSON-RPC error response 00:19:29.793 response: 00:19:29.793 { 00:19:29.793 "code": -110, 00:19:29.793 "message": "Connection timed out" 00:19:29.793 } 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 92488 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.793 rmmod nvme_tcp 00:19:29.793 rmmod nvme_fabrics 00:19:29.793 rmmod nvme_keyring 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:19:29.793 04:15:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 92463 ']' 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 92463 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 92463 ']' 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 92463 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.793 04:15:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92463 00:19:29.793 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:29.793 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:29.793 killing process with pid 92463 00:19:29.793 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92463' 00:19:29.793 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 92463 00:19:29.793 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 92463 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:30.052 00:19:30.052 real 0m9.283s 00:19:30.052 user 0m18.387s 00:19:30.052 sys 0m1.868s 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:30.052 ************************************ 00:19:30.052 END TEST nvmf_host_discovery 00:19:30.052 ************************************ 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:30.052 ************************************ 00:19:30.052 START TEST nvmf_host_multipath_status 00:19:30.052 ************************************ 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:30.052 * Looking for test storage... 00:19:30.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.052 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:30.311 Cannot find device "nvmf_tgt_br" 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.311 Cannot find device "nvmf_tgt_br2" 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:30.311 Cannot find device "nvmf_tgt_br" 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:30.311 Cannot find device "nvmf_tgt_br2" 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.311 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:30.312 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:30.312 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:30.312 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:30.312 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.312 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.312 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.312 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:30.312 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:30.312 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.312 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:30.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:19:30.570 00:19:30.570 --- 10.0.0.2 ping statistics --- 00:19:30.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.570 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:30.570 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:30.570 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:19:30.570 00:19:30.570 --- 10.0.0.3 ping statistics --- 00:19:30.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.570 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:30.570 00:19:30.570 --- 10.0.0.1 ping statistics --- 00:19:30.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.570 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=92948 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 92948 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 92948 ']' 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
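In short, the nvmf_veth_init sequence traced above (the "Cannot find device" / "Cannot open network namespace" messages come from the cleanup of devices that do not exist yet and are swallowed by the "# true" steps) builds a small bridged test network: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the two target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends of all three veth pairs are enslaved to the nvmf_br bridge. Condensed from the traced commands (stale-device cleanup and the connectivity pings omitted), the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per interface; the *_br peer ends stay in the root namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # both listeners bind here
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # not used for a listener in this test
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings then confirm reachability in both directions before nvmf_tgt is started inside the namespace with -m 0x3 (two reactors, cores 0 and 1) and the script waits for its RPC socket at /var/tmp/spdk.sock.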
00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.570 04:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:30.570 [2024-07-23 04:15:23.795391] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:19:30.570 [2024-07-23 04:15:23.795496] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.828 [2024-07-23 04:15:23.918103] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:30.828 [2024-07-23 04:15:23.935180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:30.828 [2024-07-23 04:15:23.989450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.828 [2024-07-23 04:15:23.989525] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.828 [2024-07-23 04:15:23.989551] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.828 [2024-07-23 04:15:23.989558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.828 [2024-07-23 04:15:23.989565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.828 [2024-07-23 04:15:23.989680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.828 [2024-07-23 04:15:23.989692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.828 [2024-07-23 04:15:24.039452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:30.828 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.828 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:19:30.828 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:30.828 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:30.828 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:30.828 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.828 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=92948 00:19:30.828 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:31.086 [2024-07-23 04:15:24.390747] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.086 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:31.343 Malloc0 00:19:31.343 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:19:31.910 04:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:31.910 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.168 [2024-07-23 04:15:25.400286] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.168 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:32.462 [2024-07-23 04:15:25.596320] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:32.462 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=92988 00:19:32.462 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:32.462 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.462 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 92988 /var/tmp/bdevperf.sock 00:19:32.462 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 92988 ']' 00:19:32.462 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.462 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.462 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
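With the target up, the trace above provisions it over its default RPC socket and then brings up the initiator side. Condensed, the target-side sequence is (paths and arguments exactly as traced; rpc_py is the same scripts/rpc.py the test uses):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

That is: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks added as a namespace of nqn.2016-06.io.spdk:cnode1 (created with -r, i.e. ANA reporting enabled, which is what the rest of the test exercises), and two listeners on the same address so the host sees two paths. bdevperf is then started on core 2 (-m 0x4) in -z mode against /var/tmp/bdevperf.sock with a 128-deep, 4 KiB verify workload capped at 90 seconds; the workload itself only starts once both paths are attached and the perform_tests RPC is issued a little further down.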
00:19:32.462 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.462 04:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:33.420 04:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.420 04:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:19:33.420 04:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:33.420 04:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:33.987 Nvme0n1 00:19:33.987 04:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:34.246 Nvme0n1 00:19:34.246 04:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:34.246 04:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:36.148 04:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:36.149 04:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:36.407 04:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:36.665 04:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:37.601 04:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:37.601 04:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:37.601 04:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.601 04:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:37.858 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.858 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:37.858 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.858 04:15:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:38.116 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:38.116 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:38.116 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:38.116 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.374 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.374 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:38.374 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.374 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:38.632 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.632 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:38.632 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.632 04:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:38.891 04:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.891 04:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:38.891 04:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.891 04:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:39.149 04:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:39.149 04:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:39.149 04:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:39.149 04:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:39.407 04:15:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:40.342 04:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:40.342 04:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:40.342 04:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.342 04:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:40.600 04:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:40.600 04:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:40.600 04:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.600 04:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:40.859 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.859 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:40.859 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:40.859 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:41.117 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:41.117 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:41.117 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:41.117 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:41.375 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:41.375 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:41.375 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:41.375 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:41.633 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:41.633 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:41.633 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:41.633 04:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:41.891 04:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:41.891 04:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:41.891 04:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:42.148 04:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:42.406 04:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:43.339 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:43.339 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:43.339 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.339 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:43.598 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.598 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:43.598 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.598 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:43.857 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:43.857 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:43.857 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.857 04:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:44.115 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:44.115 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:44.115 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:44.115 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:44.374 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:44.374 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:44.374 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:44.374 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:44.633 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:44.633 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:44.633 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:44.633 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:44.633 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:44.633 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:44.633 04:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:44.891 04:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:45.149 04:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:46.523 04:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:46.523 04:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:46.523 04:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.523 04:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:46.523 04:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.523 04:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:46.523 04:15:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:46.523 04:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.781 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:46.781 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:46.781 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.781 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:47.039 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.039 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:47.039 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:47.039 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.342 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.342 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:47.342 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.342 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:47.599 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.599 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:47.600 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.600 04:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:47.857 04:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:47.857 04:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:47.857 04:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:48.115 04:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:48.373 04:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:49.305 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:49.305 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:49.305 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.305 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:49.564 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:49.564 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:49.564 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.564 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:49.822 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:49.822 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:49.822 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.822 04:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:49.822 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.822 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:49.822 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.822 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:50.389 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.389 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:50.389 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.389 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:19:50.389 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:50.389 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:50.389 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.389 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:50.647 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:50.647 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:50.647 04:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:50.906 04:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:51.164 04:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:52.100 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:52.100 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:52.100 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.100 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:52.358 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:52.358 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:52.617 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.617 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:52.877 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.877 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:52.877 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:52.877 04:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
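Each round above follows the same rhythm: set_ANA_state puts the two listeners into a chosen pair of ANA states, the script sleeps for a second so the host-side driver can observe the change, and check_status asserts the resulting per-path flags. So far the trace has walked optimized/optimized, non_optimized/optimized, non_optimized/non_optimized, non_optimized/inaccessible, inaccessible/inaccessible and inaccessible/optimized. Judging by the traced sh@59/sh@60 lines, set_ANA_state is essentially the following pair of RPCs (a reconstruction; the real helper in multipath_status.sh may differ in detail):

    set_ANA_state() {
        # e.g. set_ANA_state non_optimized inaccessible: $1 -> listener 4420, $2 -> listener 4421
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }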
00:19:52.877 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.877 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:52.877 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.877 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:53.135 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.135 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:53.135 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.135 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:53.392 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:53.392 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:53.392 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.392 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:53.650 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.650 04:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:53.909 04:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:53.909 04:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:54.167 04:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:54.426 04:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:55.386 04:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:55.386 04:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:55.386 04:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
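port_status (and check_status, which strings six of them together) reads the path table over bdevperf's RPC socket and compares one field against the expected value. Reconstructed from the traced rpc.py/jq pairs, the two helpers amount to roughly the following (the actual implementations in multipath_status.sh may be written differently):

    port_status() {
        # e.g. port_status 4420 current true
        local port=$1 attr=$2 expected=$3 actual
        actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    check_status() {
        # argument order as traced: 4420/4421 current, 4420/4421 connected, 4420/4421 accessible
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
            port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
            port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

The bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call in this round also changes what the checks expect: under the earlier single-active-path behaviour only one port reported current=true at a time, whereas with active_active and both listeners optimized the next check_status expects true for both current flags.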
00:19:55.386 04:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:55.645 04:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.645 04:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:55.645 04:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.645 04:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:55.904 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.904 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:55.904 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.904 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:56.163 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.163 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:56.163 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.163 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:56.163 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.163 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:56.163 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.163 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:56.422 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.422 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:56.422 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.422 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:56.680 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.680 
04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:56.680 04:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:56.939 04:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:57.197 04:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:58.133 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:58.133 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:58.133 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:58.133 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.392 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:58.392 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:58.392 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.392 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:58.651 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.651 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:58.651 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.651 04:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:58.909 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.909 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:58.910 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.910 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:59.168 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.168 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:59.168 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.168 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:59.426 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.426 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:59.426 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:59.426 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.684 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.684 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:59.684 04:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:59.943 04:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:20:00.200 04:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:01.136 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:01.136 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:01.136 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:01.136 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.394 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.394 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:01.394 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:01.394 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.655 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.655 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:01.655 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.655 04:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:01.920 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.920 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:01.920 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.920 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:02.178 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.179 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:02.179 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.179 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:02.437 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.437 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:02.437 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:02.437 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.696 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.696 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:02.696 04:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:02.954 04:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:03.213 04:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:04.148 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:04.148 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:04.148 04:15:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.148 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:04.407 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.407 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:04.407 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.407 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:04.665 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:04.665 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:04.665 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:04.665 04:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.923 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.923 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:04.923 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.924 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:05.182 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.182 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:05.182 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.182 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:05.182 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.182 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:05.182 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.182 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:05.440 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:05.440 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 92988 00:20:05.440 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 92988 ']' 00:20:05.440 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 92988 00:20:05.440 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:20:05.440 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.440 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92988 00:20:05.440 killing process with pid 92988 00:20:05.440 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:05.440 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:05.440 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92988' 00:20:05.441 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 92988 00:20:05.441 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 92988 00:20:05.706 Connection closed with partial response: 00:20:05.706 00:20:05.706 00:20:05.706 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 92988 00:20:05.706 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:05.706 [2024-07-23 04:15:25.661496] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:20:05.706 [2024-07-23 04:15:25.661603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92988 ] 00:20:05.706 [2024-07-23 04:15:25.780581] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:05.706 [2024-07-23 04:15:25.791199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.706 [2024-07-23 04:15:25.861367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.706 [2024-07-23 04:15:25.913496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:05.706 Running I/O for 90 seconds... 
00:20:05.706 [2024-07-23 04:15:41.231160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.706 [2024-07-23 04:15:41.231230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:05.706 [2024-07-23 04:15:41.231302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.706 [2024-07-23 04:15:41.231322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:05.706 [2024-07-23 04:15:41.231343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.706 [2024-07-23 04:15:41.231372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:05.706 [2024-07-23 04:15:41.231392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.706 [2024-07-23 04:15:41.231405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:130336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.231969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.231990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:130440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130504 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:118 nsid:1 lba:130584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:05.707 [2024-07-23 04:15:41.232767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.707 [2024-07-23 04:15:41.232784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.232804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.232819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.232838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.232852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.232870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.232884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.232903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.232930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.232952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.232966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.232985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.233007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.233061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.233260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233287] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.233304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.233341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.233379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.233430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.233466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.233502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.233966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.233989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.234004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.234027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.234041] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.234064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.708 [2024-07-23 04:15:41.234078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.234101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.234115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.234677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.234701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.234730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.234746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.234770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.234785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.234809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.234823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.234848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.234862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.234886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.708 [2024-07-23 04:15:41.234900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.708 [2024-07-23 04:15:41.234940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.709 [2024-07-23 04:15:41.234958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.234983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.709 [2024-07-23 
04:15:41.234997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.709 [2024-07-23 04:15:41.235067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.709 [2024-07-23 04:15:41.235466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129952 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.235952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.235971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f 
p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:05.709 [2024-07-23 04:15:41.236570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.709 [2024-07-23 04:15:41.236583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.236607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.236626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.236651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.236665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.236689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.236703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.236727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.236741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.236765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.236790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.236816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.236830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.236854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.236868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.236917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.236935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.236960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.236975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.237000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.237016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.237041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.237056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.237081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 04:15:41.237095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:05.710 [2024-07-23 04:15:41.237121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.710 [2024-07-23 
04:15:41.237135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:20:05.710 [2024-07-23 04:15:56.289073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:05.710 [2024-07-23 04:15:56.289137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:20:05.712 [2024-07-23 04:15:56.294753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:05.712 [2024-07-23 04:15:56.294768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0
(00:20:05.710-00:20:05.716: the NOTICE pattern above repeats for WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1 nsid:1 across lba 24312-25464, each reported by 474:spdk_nvme_print_completion as ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; cid and sqhd values vary per entry)
00:20:05.716 [2024-07-23 04:15:56.306972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:05.716 [2024-07-23 04:15:56.307000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:20:05.716 [2024-07-23 04:15:56.307053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:05.716 
[2024-07-23 04:15:56.307457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.307505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.307540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.307574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.307608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.307643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.307677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.307726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.307768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.307901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.307934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.307965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.307999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.308020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.308035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.308056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.716 [2024-07-23 04:15:56.308070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.308090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.308104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.308124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.308138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.308158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.308172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:05.716 [2024-07-23 04:15:56.308193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.716 [2024-07-23 04:15:56.308232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.308300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.308348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.308380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:05.717 [2024-07-23 04:15:56.308541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.308594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.308867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.308899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.308950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.308986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.309002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.309022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.309036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.309057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.309081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.309101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.309123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.309145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.309160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.311139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.311168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.311206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.311225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.311246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.311261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.311297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.311460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.311489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.311506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.311528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.717 [2024-07-23 04:15:56.311543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.311846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.311890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.311929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.311948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.311971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.311986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.312008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.312023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.312044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.312059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:05.717 [2024-07-23 04:15:56.312095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.717 [2024-07-23 04:15:56.312112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:05.717 Received shutdown signal, test time was about 31.270312 seconds 00:20:05.717 00:20:05.717 Latency(us) 00:20:05.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.718 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:05.718 Verification LBA range: start 0x0 length 0x4000 
00:20:05.718 Nvme0n1 : 31.27 8816.70 34.44 0.00 0.00 14490.59 469.18 4026531.84 00:20:05.718 =================================================================================================================== 00:20:05.718 Total : 8816.70 34.44 0.00 0.00 14490.59 469.18 4026531.84 00:20:05.718 04:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.977 rmmod nvme_tcp 00:20:05.977 rmmod nvme_fabrics 00:20:05.977 rmmod nvme_keyring 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 92948 ']' 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 92948 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 92948 ']' 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 92948 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.977 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92948 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.236 killing process with pid 92948 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92948' 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 92948 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 92948 00:20:06.236 04:15:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.236 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:06.236 00:20:06.236 real 0m36.260s 00:20:06.236 user 1m56.380s 00:20:06.236 sys 0m11.520s 00:20:06.237 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.237 04:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:06.237 ************************************ 00:20:06.237 END TEST nvmf_host_multipath_status 00:20:06.237 ************************************ 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.496 ************************************ 00:20:06.496 START TEST nvmf_discovery_remove_ifc 00:20:06.496 ************************************ 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:06.496 * Looking for test storage... 
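For reference, the nvmf_host_multipath_status teardown traced above reduces to roughly the following shell sequence. This is a condensed sketch of what the trace shows, not the full nvmf/common.sh cleanup logic; the subsystem NQN and target PID are the ones from this run, and error handling is omitted.

    # Delete the test subsystem on the target, then unwind host modules and the target app.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    kill 92948 && wait 92948       # nvmfpid of the target application for this test
    ip -4 addr flush nvmf_init_if  # clear the initiator-side address before the next test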
00:20:06.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
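The NVME_* defaults sourced above (host NQN from nvme gen-hostnqn, host ID, NVME_CONNECT='nvme connect') are the knobs that kernel-initiator tests pass to nvme-cli. Purely as a hedged illustration of how those pieces combine (this particular test drives the SPDK host app rather than the kernel initiator, and the exact flags vary per test):

    # Hypothetical kernel-initiator connect using the values sourced above.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 \
        --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274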
00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:06.496 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:06.497 Cannot find device "nvmf_tgt_br" 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.497 Cannot find device "nvmf_tgt_br2" 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:06.497 Cannot find device "nvmf_tgt_br" 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:06.497 Cannot find device "nvmf_tgt_br2" 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:20:06.497 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.756 04:15:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:06.756 04:15:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:06.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:20:06.756 00:20:06.756 --- 10.0.0.2 ping statistics --- 00:20:06.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.756 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:06.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:06.756 00:20:06.756 --- 10.0.0.3 ping statistics --- 00:20:06.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.756 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:06.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:06.756 00:20:06.756 --- 10.0.0.1 ping statistics --- 00:20:06.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.756 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=93753 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 93753 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 93753 ']' 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.756 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:07.015 [2024-07-23 04:16:00.123600] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
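Before the target app's startup banner above, the nvmf_veth_init trace built the bridged veth topology that the three pings verified: an initiator interface on the host side and target interfaces inside the nvmf_tgt_ns_spdk namespace. A minimal sketch of just the first target leg, using only commands from the trace (the 10.0.0.3 interface follows the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # host -> target namespace, as in the trace above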
00:20:07.015 [2024-07-23 04:16:00.123675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.015 [2024-07-23 04:16:00.240039] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:07.015 [2024-07-23 04:16:00.262571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.015 [2024-07-23 04:16:00.328915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.015 [2024-07-23 04:16:00.328990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.015 [2024-07-23 04:16:00.329007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.015 [2024-07-23 04:16:00.329019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.015 [2024-07-23 04:16:00.329030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.015 [2024-07-23 04:16:00.329073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.274 [2024-07-23 04:16:00.386019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:07.274 [2024-07-23 04:16:00.497239] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.274 [2024-07-23 04:16:00.505380] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:07.274 null0 00:20:07.274 [2024-07-23 04:16:00.537310] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=93778 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # 
waitforlisten 93778 /tmp/host.sock 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 93778 ']' 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.274 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.274 04:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:07.274 [2024-07-23 04:16:00.603226] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:20:07.274 [2024-07-23 04:16:00.603312] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93778 ] 00:20:07.532 [2024-07-23 04:16:00.720510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:07.532 [2024-07-23 04:16:00.738642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.532 [2024-07-23 04:16:00.799968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:08.468 [2024-07-23 04:16:01.575729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f 
ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.468 04:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:09.416 [2024-07-23 04:16:02.620159] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:09.416 [2024-07-23 04:16:02.620184] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:09.416 [2024-07-23 04:16:02.620217] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:09.416 [2024-07-23 04:16:02.626201] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:09.416 [2024-07-23 04:16:02.682932] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:09.416 [2024-07-23 04:16:02.683010] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:09.416 [2024-07-23 04:16:02.683066] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:09.416 [2024-07-23 04:16:02.683083] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:09.416 [2024-07-23 04:16:02.683102] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:09.416 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:09.417 [2024-07-23 04:16:02.688942] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20c8110 was disconnected and freed. delete nvme_qpair. 
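The trace that follows repeats the host script's get_bdev_list/wait_for_bdev helpers: bdev_get_bdevs is queried over /tmp/host.sock, the names are normalized with jq, sort and xargs, and the script sleeps one second at a time until the expected bdev shows up. A condensed reconstruction of that pair, substituting scripts/rpc.py for the harness's rpc_cmd wrapper (an assumption; the wrapper itself is not shown in this excerpt):

    get_bdev_list() {
        # One sorted, space-separated line of bdev names known to the host app on /tmp/host.sock.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value (empty string = no bdevs left).
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1   # block until discovery has attached nvme0 and exposed nvme0n1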
00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:09.417 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:09.675 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.675 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:09.675 04:16:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:10.608 04:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:10.608 04:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.608 04:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:10.608 04:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.608 04:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:10.608 04:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:10.608 04:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:10.608 04:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.608 04:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:10.608 04:16:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:11.542 04:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:11.542 04:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.542 04:16:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:11.542 04:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:11.542 04:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.542 04:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:11.542 04:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:11.800 04:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.800 04:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:11.800 04:16:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:12.736 04:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:12.736 04:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:12.736 04:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.736 04:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:12.736 04:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:12.736 04:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:12.736 04:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:12.736 04:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.736 04:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:12.736 04:16:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:13.672 04:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:13.672 04:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:13.672 04:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.672 04:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:13.672 04:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:13.672 04:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:13.672 04:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:13.672 04:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.931 04:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:13.931 04:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:14.867 04:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:14.867 04:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:14.867 04:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.867 04:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:14.867 04:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:14.867 04:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:14.867 04:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:14.867 04:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.867 04:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:14.867 04:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:14.867 [2024-07-23 04:16:08.110822] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:14.867 [2024-07-23 04:16:08.110890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.867 [2024-07-23 04:16:08.110921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.867 [2024-07-23 04:16:08.110962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.867 [2024-07-23 04:16:08.110972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.867 [2024-07-23 04:16:08.110983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.867 [2024-07-23 04:16:08.110993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.867 [2024-07-23 04:16:08.111004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.867 [2024-07-23 04:16:08.111014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.867 [2024-07-23 04:16:08.111034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.867 [2024-07-23 04:16:08.111045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.867 [2024-07-23 04:16:08.111055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208caf0 is same with the state(5) to be set 00:20:14.867 [2024-07-23 04:16:08.120820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208caf0 (9): Bad file descriptor 00:20:14.867 [2024-07-23 04:16:08.130837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:15.803 04:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:15.803 04:16:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:15.803 04:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.803 04:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:15.803 04:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:15.803 04:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:15.804 04:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:16.062 [2024-07-23 04:16:09.186989] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:16.062 [2024-07-23 04:16:09.187102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208caf0 with addr=10.0.0.2, port=4420 00:20:16.062 [2024-07-23 04:16:09.187127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208caf0 is same with the state(5) to be set 00:20:16.062 [2024-07-23 04:16:09.187170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208caf0 (9): Bad file descriptor 00:20:16.062 [2024-07-23 04:16:09.187779] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:16.062 [2024-07-23 04:16:09.187816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:16.062 [2024-07-23 04:16:09.187835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:16.062 [2024-07-23 04:16:09.187852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:16.062 [2024-07-23 04:16:09.187881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:16.062 [2024-07-23 04:16:09.187933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:16.062 04:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.062 04:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:16.062 04:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:17.051 [2024-07-23 04:16:10.187975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:17.051 [2024-07-23 04:16:10.188023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:17.051 [2024-07-23 04:16:10.188050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:17.051 [2024-07-23 04:16:10.188059] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:20:17.051 [2024-07-23 04:16:10.188075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
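While nvmf_tgt_if is down, the host keeps failing connect() with errno 110 and cycling through the reset path shown above. One hedged way to watch that state from outside the test, using the standard bdev_nvme_get_controllers RPC (not something this script calls):

    # Show the NVMe controllers the host app tracks, including transport ID and per-path details.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .
    # Re-run it every second while the reconnect/reset loop is active.
    watch -n 1 "/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers"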
00:20:17.051 [2024-07-23 04:16:10.188098] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:20:17.051 [2024-07-23 04:16:10.188129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.051 [2024-07-23 04:16:10.188142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.051 [2024-07-23 04:16:10.188154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.051 [2024-07-23 04:16:10.188163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.051 [2024-07-23 04:16:10.188172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.051 [2024-07-23 04:16:10.188180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.051 [2024-07-23 04:16:10.188189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.051 [2024-07-23 04:16:10.188197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.051 [2024-07-23 04:16:10.188206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.051 [2024-07-23 04:16:10.188214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.051 [2024-07-23 04:16:10.188222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
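At this point the discovery poller has removed the nqn.2016-06.io.spdk:cnode0 entry and the admin queue has been torn down; only the discovery context itself should remain. If the SPDK build in use exposes bdev_nvme_get_discovery_info (present in recent releases; a hedged suggestion, the test does not call it), the surviving context can be confirmed with:

    # Lists the active discovery service contexts (name and discovery subsystem trid) on the host app.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq .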
00:20:17.051 [2024-07-23 04:16:10.188252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208c0e0 (9): Bad file descriptor 00:20:17.051 [2024-07-23 04:16:10.189248] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:17.051 [2024-07-23 04:16:10.189286] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:17.051 04:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:18.428 04:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:18.428 04:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:18.428 04:16:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:18.428 04:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.428 04:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:18.428 04:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:18.428 04:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:18.428 04:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.428 04:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:18.428 04:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:18.996 [2024-07-23 04:16:12.195053] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:18.996 [2024-07-23 04:16:12.195081] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:18.996 [2024-07-23 04:16:12.195115] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:18.996 [2024-07-23 04:16:12.201085] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:20:18.996 [2024-07-23 04:16:12.257050] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:18.996 [2024-07-23 04:16:12.257113] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:18.996 [2024-07-23 04:16:12.257136] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:18.996 [2024-07-23 04:16:12.257150] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:20:18.996 [2024-07-23 04:16:12.257158] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:18.996 [2024-07-23 04:16:12.263721] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x209ada0 was disconnected and freed. delete nvme_qpair. 
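After the interface is restored, the discovery poller re-attaches the subsystem as nvme1 and the test finishes by killing the host and target processes (killprocess/nvmftestfini below). Outside the harness, a more orderly teardown is to stop the discovery context first; a sketch using the same rpc.py client, with the -b flag mirroring the bdev_nvme_start_discovery call earlier:

    # Detach the discovery service and the controllers it created, then stop the host app.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
    kill "$hostpid" && wait "$hostpid" 2>/dev/null   # $hostpid as tracked by the test script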
00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 93778 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 93778 ']' 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 93778 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93778 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:19.255 killing process with pid 93778 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93778' 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 93778 00:20:19.255 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 93778 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:19.513 rmmod nvme_tcp 00:20:19.513 rmmod nvme_fabrics 00:20:19.513 rmmod nvme_keyring 00:20:19.513 04:16:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 93753 ']' 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 93753 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 93753 ']' 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 93753 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93753 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93753' 00:20:19.513 killing process with pid 93753 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 93753 00:20:19.513 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 93753 00:20:19.773 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:19.773 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:19.773 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:19.773 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:19.773 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:19.773 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.773 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.773 04:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.773 04:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:19.773 ************************************ 00:20:19.773 END TEST nvmf_discovery_remove_ifc 00:20:19.773 ************************************ 00:20:19.773 00:20:19.773 real 0m13.400s 00:20:19.773 user 0m23.331s 00:20:19.773 sys 0m2.489s 00:20:19.773 04:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.773 04:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:19.773 04:16:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:20:19.773 04:16:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- 
# run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:19.773 04:16:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:19.773 04:16:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.773 04:16:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.773 ************************************ 00:20:19.773 START TEST nvmf_identify_kernel_target 00:20:19.773 ************************************ 00:20:19.773 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:20.033 * Looking for test storage... 00:20:20.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:20.033 04:16:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:20.033 Cannot find device "nvmf_tgt_br" 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:20.033 Cannot find device "nvmf_tgt_br2" 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:20.033 Cannot find device "nvmf_tgt_br" 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:20.033 Cannot find device "nvmf_tgt_br2" 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:20.033 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:20.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.034 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:20.034 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:20.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.034 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:20.034 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:20.034 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:20.034 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:20.034 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:20.034 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:20.034 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:20.293 
04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:20.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:20:20.293 00:20:20.293 --- 10.0.0.2 ping statistics --- 00:20:20.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.293 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:20.293 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:20.293 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:20:20.293 00:20:20.293 --- 10.0.0.3 ping statistics --- 00:20:20.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.293 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:20.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:20.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:20:20.293 00:20:20.293 --- 10.0.0.1 ping statistics --- 00:20:20.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.293 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:20.293 04:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:20.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:20.811 Waiting for block devices as requested 00:20:20.811 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:20.811 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:20.811 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:20.811 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:20.811 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:20.811 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:20.811 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:20.811 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:20.811 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:20.811 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:20.811 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:21.071 No valid GPT data, bailing 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:21.071 04:16:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:21.071 No valid GPT data, bailing 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:21.071 No valid GPT data, bailing 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:21.071 No valid GPT data, bailing 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:21.071 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:21.072 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:21.072 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:21.072 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:20:21.072 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:21.072 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:20:21.072 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:21.072 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:20:21.072 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:20:21.072 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:20:21.072 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:21.331 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -a 10.0.0.1 -t tcp -s 4420 00:20:21.331 00:20:21.331 Discovery Log Number of Records 2, Generation counter 2 00:20:21.331 =====Discovery Log Entry 0====== 00:20:21.331 trtype: tcp 00:20:21.331 adrfam: ipv4 00:20:21.331 subtype: current discovery subsystem 00:20:21.331 treq: not specified, sq flow control disable supported 00:20:21.331 portid: 1 00:20:21.331 trsvcid: 4420 00:20:21.331 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:21.331 traddr: 10.0.0.1 00:20:21.331 eflags: none 00:20:21.331 sectype: none 00:20:21.331 =====Discovery Log Entry 1====== 00:20:21.331 trtype: tcp 00:20:21.331 adrfam: ipv4 00:20:21.331 subtype: nvme subsystem 00:20:21.331 treq: not 
specified, sq flow control disable supported 00:20:21.331 portid: 1 00:20:21.331 trsvcid: 4420 00:20:21.331 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:21.331 traddr: 10.0.0.1 00:20:21.331 eflags: none 00:20:21.331 sectype: none 00:20:21.331 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:21.331 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:21.331 ===================================================== 00:20:21.331 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:21.331 ===================================================== 00:20:21.331 Controller Capabilities/Features 00:20:21.331 ================================ 00:20:21.331 Vendor ID: 0000 00:20:21.331 Subsystem Vendor ID: 0000 00:20:21.331 Serial Number: 59dc1a1d1b05b58fb0b7 00:20:21.331 Model Number: Linux 00:20:21.331 Firmware Version: 6.7.0-68 00:20:21.331 Recommended Arb Burst: 0 00:20:21.331 IEEE OUI Identifier: 00 00 00 00:20:21.331 Multi-path I/O 00:20:21.331 May have multiple subsystem ports: No 00:20:21.331 May have multiple controllers: No 00:20:21.331 Associated with SR-IOV VF: No 00:20:21.331 Max Data Transfer Size: Unlimited 00:20:21.331 Max Number of Namespaces: 0 00:20:21.331 Max Number of I/O Queues: 1024 00:20:21.331 NVMe Specification Version (VS): 1.3 00:20:21.331 NVMe Specification Version (Identify): 1.3 00:20:21.331 Maximum Queue Entries: 1024 00:20:21.331 Contiguous Queues Required: No 00:20:21.331 Arbitration Mechanisms Supported 00:20:21.331 Weighted Round Robin: Not Supported 00:20:21.331 Vendor Specific: Not Supported 00:20:21.331 Reset Timeout: 7500 ms 00:20:21.331 Doorbell Stride: 4 bytes 00:20:21.331 NVM Subsystem Reset: Not Supported 00:20:21.331 Command Sets Supported 00:20:21.331 NVM Command Set: Supported 00:20:21.331 Boot Partition: Not Supported 00:20:21.331 Memory Page Size Minimum: 4096 bytes 00:20:21.331 Memory Page Size Maximum: 4096 bytes 00:20:21.331 Persistent Memory Region: Not Supported 00:20:21.331 Optional Asynchronous Events Supported 00:20:21.331 Namespace Attribute Notices: Not Supported 00:20:21.331 Firmware Activation Notices: Not Supported 00:20:21.331 ANA Change Notices: Not Supported 00:20:21.331 PLE Aggregate Log Change Notices: Not Supported 00:20:21.332 LBA Status Info Alert Notices: Not Supported 00:20:21.332 EGE Aggregate Log Change Notices: Not Supported 00:20:21.332 Normal NVM Subsystem Shutdown event: Not Supported 00:20:21.332 Zone Descriptor Change Notices: Not Supported 00:20:21.332 Discovery Log Change Notices: Supported 00:20:21.332 Controller Attributes 00:20:21.332 128-bit Host Identifier: Not Supported 00:20:21.332 Non-Operational Permissive Mode: Not Supported 00:20:21.332 NVM Sets: Not Supported 00:20:21.332 Read Recovery Levels: Not Supported 00:20:21.332 Endurance Groups: Not Supported 00:20:21.332 Predictable Latency Mode: Not Supported 00:20:21.332 Traffic Based Keep ALive: Not Supported 00:20:21.332 Namespace Granularity: Not Supported 00:20:21.332 SQ Associations: Not Supported 00:20:21.332 UUID List: Not Supported 00:20:21.332 Multi-Domain Subsystem: Not Supported 00:20:21.332 Fixed Capacity Management: Not Supported 00:20:21.332 Variable Capacity Management: Not Supported 00:20:21.332 Delete Endurance Group: Not Supported 00:20:21.332 Delete NVM Set: Not Supported 00:20:21.332 Extended LBA Formats Supported: Not Supported 00:20:21.332 Flexible Data 
Placement Supported: Not Supported 00:20:21.332 00:20:21.332 Controller Memory Buffer Support 00:20:21.332 ================================ 00:20:21.332 Supported: No 00:20:21.332 00:20:21.332 Persistent Memory Region Support 00:20:21.332 ================================ 00:20:21.332 Supported: No 00:20:21.332 00:20:21.332 Admin Command Set Attributes 00:20:21.332 ============================ 00:20:21.332 Security Send/Receive: Not Supported 00:20:21.332 Format NVM: Not Supported 00:20:21.332 Firmware Activate/Download: Not Supported 00:20:21.332 Namespace Management: Not Supported 00:20:21.332 Device Self-Test: Not Supported 00:20:21.332 Directives: Not Supported 00:20:21.332 NVMe-MI: Not Supported 00:20:21.332 Virtualization Management: Not Supported 00:20:21.332 Doorbell Buffer Config: Not Supported 00:20:21.332 Get LBA Status Capability: Not Supported 00:20:21.332 Command & Feature Lockdown Capability: Not Supported 00:20:21.332 Abort Command Limit: 1 00:20:21.332 Async Event Request Limit: 1 00:20:21.332 Number of Firmware Slots: N/A 00:20:21.332 Firmware Slot 1 Read-Only: N/A 00:20:21.332 Firmware Activation Without Reset: N/A 00:20:21.332 Multiple Update Detection Support: N/A 00:20:21.332 Firmware Update Granularity: No Information Provided 00:20:21.332 Per-Namespace SMART Log: No 00:20:21.332 Asymmetric Namespace Access Log Page: Not Supported 00:20:21.332 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:21.332 Command Effects Log Page: Not Supported 00:20:21.332 Get Log Page Extended Data: Supported 00:20:21.332 Telemetry Log Pages: Not Supported 00:20:21.332 Persistent Event Log Pages: Not Supported 00:20:21.332 Supported Log Pages Log Page: May Support 00:20:21.332 Commands Supported & Effects Log Page: Not Supported 00:20:21.332 Feature Identifiers & Effects Log Page:May Support 00:20:21.332 NVMe-MI Commands & Effects Log Page: May Support 00:20:21.332 Data Area 4 for Telemetry Log: Not Supported 00:20:21.332 Error Log Page Entries Supported: 1 00:20:21.332 Keep Alive: Not Supported 00:20:21.332 00:20:21.332 NVM Command Set Attributes 00:20:21.332 ========================== 00:20:21.332 Submission Queue Entry Size 00:20:21.332 Max: 1 00:20:21.332 Min: 1 00:20:21.332 Completion Queue Entry Size 00:20:21.332 Max: 1 00:20:21.332 Min: 1 00:20:21.332 Number of Namespaces: 0 00:20:21.332 Compare Command: Not Supported 00:20:21.332 Write Uncorrectable Command: Not Supported 00:20:21.332 Dataset Management Command: Not Supported 00:20:21.332 Write Zeroes Command: Not Supported 00:20:21.332 Set Features Save Field: Not Supported 00:20:21.332 Reservations: Not Supported 00:20:21.332 Timestamp: Not Supported 00:20:21.332 Copy: Not Supported 00:20:21.332 Volatile Write Cache: Not Present 00:20:21.332 Atomic Write Unit (Normal): 1 00:20:21.332 Atomic Write Unit (PFail): 1 00:20:21.332 Atomic Compare & Write Unit: 1 00:20:21.332 Fused Compare & Write: Not Supported 00:20:21.332 Scatter-Gather List 00:20:21.332 SGL Command Set: Supported 00:20:21.332 SGL Keyed: Not Supported 00:20:21.332 SGL Bit Bucket Descriptor: Not Supported 00:20:21.332 SGL Metadata Pointer: Not Supported 00:20:21.332 Oversized SGL: Not Supported 00:20:21.332 SGL Metadata Address: Not Supported 00:20:21.332 SGL Offset: Supported 00:20:21.332 Transport SGL Data Block: Not Supported 00:20:21.332 Replay Protected Memory Block: Not Supported 00:20:21.332 00:20:21.332 Firmware Slot Information 00:20:21.332 ========================= 00:20:21.332 Active slot: 0 00:20:21.332 00:20:21.332 00:20:21.332 Error Log 
00:20:21.332 ========= 00:20:21.332 00:20:21.332 Active Namespaces 00:20:21.332 ================= 00:20:21.332 Discovery Log Page 00:20:21.332 ================== 00:20:21.332 Generation Counter: 2 00:20:21.332 Number of Records: 2 00:20:21.332 Record Format: 0 00:20:21.332 00:20:21.332 Discovery Log Entry 0 00:20:21.332 ---------------------- 00:20:21.332 Transport Type: 3 (TCP) 00:20:21.332 Address Family: 1 (IPv4) 00:20:21.332 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:21.332 Entry Flags: 00:20:21.332 Duplicate Returned Information: 0 00:20:21.332 Explicit Persistent Connection Support for Discovery: 0 00:20:21.332 Transport Requirements: 00:20:21.332 Secure Channel: Not Specified 00:20:21.332 Port ID: 1 (0x0001) 00:20:21.332 Controller ID: 65535 (0xffff) 00:20:21.332 Admin Max SQ Size: 32 00:20:21.332 Transport Service Identifier: 4420 00:20:21.332 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:21.332 Transport Address: 10.0.0.1 00:20:21.332 Discovery Log Entry 1 00:20:21.332 ---------------------- 00:20:21.332 Transport Type: 3 (TCP) 00:20:21.332 Address Family: 1 (IPv4) 00:20:21.332 Subsystem Type: 2 (NVM Subsystem) 00:20:21.332 Entry Flags: 00:20:21.332 Duplicate Returned Information: 0 00:20:21.332 Explicit Persistent Connection Support for Discovery: 0 00:20:21.332 Transport Requirements: 00:20:21.332 Secure Channel: Not Specified 00:20:21.332 Port ID: 1 (0x0001) 00:20:21.332 Controller ID: 65535 (0xffff) 00:20:21.332 Admin Max SQ Size: 32 00:20:21.332 Transport Service Identifier: 4420 00:20:21.332 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:21.332 Transport Address: 10.0.0.1 00:20:21.332 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:21.592 get_feature(0x01) failed 00:20:21.592 get_feature(0x02) failed 00:20:21.592 get_feature(0x04) failed 00:20:21.592 ===================================================== 00:20:21.592 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:21.592 ===================================================== 00:20:21.592 Controller Capabilities/Features 00:20:21.592 ================================ 00:20:21.592 Vendor ID: 0000 00:20:21.592 Subsystem Vendor ID: 0000 00:20:21.592 Serial Number: 9ad9b722b50442998a5e 00:20:21.592 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:21.592 Firmware Version: 6.7.0-68 00:20:21.592 Recommended Arb Burst: 6 00:20:21.592 IEEE OUI Identifier: 00 00 00 00:20:21.592 Multi-path I/O 00:20:21.592 May have multiple subsystem ports: Yes 00:20:21.592 May have multiple controllers: Yes 00:20:21.592 Associated with SR-IOV VF: No 00:20:21.592 Max Data Transfer Size: Unlimited 00:20:21.592 Max Number of Namespaces: 1024 00:20:21.592 Max Number of I/O Queues: 128 00:20:21.592 NVMe Specification Version (VS): 1.3 00:20:21.592 NVMe Specification Version (Identify): 1.3 00:20:21.592 Maximum Queue Entries: 1024 00:20:21.592 Contiguous Queues Required: No 00:20:21.592 Arbitration Mechanisms Supported 00:20:21.592 Weighted Round Robin: Not Supported 00:20:21.592 Vendor Specific: Not Supported 00:20:21.592 Reset Timeout: 7500 ms 00:20:21.592 Doorbell Stride: 4 bytes 00:20:21.592 NVM Subsystem Reset: Not Supported 00:20:21.592 Command Sets Supported 00:20:21.592 NVM Command Set: Supported 00:20:21.592 Boot Partition: Not Supported 00:20:21.592 Memory 
Page Size Minimum: 4096 bytes 00:20:21.592 Memory Page Size Maximum: 4096 bytes 00:20:21.592 Persistent Memory Region: Not Supported 00:20:21.592 Optional Asynchronous Events Supported 00:20:21.592 Namespace Attribute Notices: Supported 00:20:21.592 Firmware Activation Notices: Not Supported 00:20:21.592 ANA Change Notices: Supported 00:20:21.592 PLE Aggregate Log Change Notices: Not Supported 00:20:21.592 LBA Status Info Alert Notices: Not Supported 00:20:21.592 EGE Aggregate Log Change Notices: Not Supported 00:20:21.592 Normal NVM Subsystem Shutdown event: Not Supported 00:20:21.592 Zone Descriptor Change Notices: Not Supported 00:20:21.592 Discovery Log Change Notices: Not Supported 00:20:21.592 Controller Attributes 00:20:21.592 128-bit Host Identifier: Supported 00:20:21.592 Non-Operational Permissive Mode: Not Supported 00:20:21.592 NVM Sets: Not Supported 00:20:21.592 Read Recovery Levels: Not Supported 00:20:21.592 Endurance Groups: Not Supported 00:20:21.592 Predictable Latency Mode: Not Supported 00:20:21.592 Traffic Based Keep ALive: Supported 00:20:21.592 Namespace Granularity: Not Supported 00:20:21.592 SQ Associations: Not Supported 00:20:21.592 UUID List: Not Supported 00:20:21.592 Multi-Domain Subsystem: Not Supported 00:20:21.592 Fixed Capacity Management: Not Supported 00:20:21.592 Variable Capacity Management: Not Supported 00:20:21.592 Delete Endurance Group: Not Supported 00:20:21.592 Delete NVM Set: Not Supported 00:20:21.592 Extended LBA Formats Supported: Not Supported 00:20:21.592 Flexible Data Placement Supported: Not Supported 00:20:21.592 00:20:21.592 Controller Memory Buffer Support 00:20:21.592 ================================ 00:20:21.592 Supported: No 00:20:21.592 00:20:21.592 Persistent Memory Region Support 00:20:21.592 ================================ 00:20:21.592 Supported: No 00:20:21.592 00:20:21.592 Admin Command Set Attributes 00:20:21.592 ============================ 00:20:21.592 Security Send/Receive: Not Supported 00:20:21.592 Format NVM: Not Supported 00:20:21.592 Firmware Activate/Download: Not Supported 00:20:21.592 Namespace Management: Not Supported 00:20:21.592 Device Self-Test: Not Supported 00:20:21.592 Directives: Not Supported 00:20:21.592 NVMe-MI: Not Supported 00:20:21.592 Virtualization Management: Not Supported 00:20:21.592 Doorbell Buffer Config: Not Supported 00:20:21.592 Get LBA Status Capability: Not Supported 00:20:21.592 Command & Feature Lockdown Capability: Not Supported 00:20:21.592 Abort Command Limit: 4 00:20:21.592 Async Event Request Limit: 4 00:20:21.592 Number of Firmware Slots: N/A 00:20:21.592 Firmware Slot 1 Read-Only: N/A 00:20:21.592 Firmware Activation Without Reset: N/A 00:20:21.592 Multiple Update Detection Support: N/A 00:20:21.592 Firmware Update Granularity: No Information Provided 00:20:21.592 Per-Namespace SMART Log: Yes 00:20:21.592 Asymmetric Namespace Access Log Page: Supported 00:20:21.592 ANA Transition Time : 10 sec 00:20:21.592 00:20:21.592 Asymmetric Namespace Access Capabilities 00:20:21.592 ANA Optimized State : Supported 00:20:21.592 ANA Non-Optimized State : Supported 00:20:21.592 ANA Inaccessible State : Supported 00:20:21.592 ANA Persistent Loss State : Supported 00:20:21.592 ANA Change State : Supported 00:20:21.592 ANAGRPID is not changed : No 00:20:21.592 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:21.592 00:20:21.592 ANA Group Identifier Maximum : 128 00:20:21.592 Number of ANA Group Identifiers : 128 00:20:21.592 Max Number of Allowed Namespaces : 1024 00:20:21.592 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:21.592 Command Effects Log Page: Supported 00:20:21.592 Get Log Page Extended Data: Supported 00:20:21.592 Telemetry Log Pages: Not Supported 00:20:21.592 Persistent Event Log Pages: Not Supported 00:20:21.592 Supported Log Pages Log Page: May Support 00:20:21.592 Commands Supported & Effects Log Page: Not Supported 00:20:21.592 Feature Identifiers & Effects Log Page:May Support 00:20:21.593 NVMe-MI Commands & Effects Log Page: May Support 00:20:21.593 Data Area 4 for Telemetry Log: Not Supported 00:20:21.593 Error Log Page Entries Supported: 128 00:20:21.593 Keep Alive: Supported 00:20:21.593 Keep Alive Granularity: 1000 ms 00:20:21.593 00:20:21.593 NVM Command Set Attributes 00:20:21.593 ========================== 00:20:21.593 Submission Queue Entry Size 00:20:21.593 Max: 64 00:20:21.593 Min: 64 00:20:21.593 Completion Queue Entry Size 00:20:21.593 Max: 16 00:20:21.593 Min: 16 00:20:21.593 Number of Namespaces: 1024 00:20:21.593 Compare Command: Not Supported 00:20:21.593 Write Uncorrectable Command: Not Supported 00:20:21.593 Dataset Management Command: Supported 00:20:21.593 Write Zeroes Command: Supported 00:20:21.593 Set Features Save Field: Not Supported 00:20:21.593 Reservations: Not Supported 00:20:21.593 Timestamp: Not Supported 00:20:21.593 Copy: Not Supported 00:20:21.593 Volatile Write Cache: Present 00:20:21.593 Atomic Write Unit (Normal): 1 00:20:21.593 Atomic Write Unit (PFail): 1 00:20:21.593 Atomic Compare & Write Unit: 1 00:20:21.593 Fused Compare & Write: Not Supported 00:20:21.593 Scatter-Gather List 00:20:21.593 SGL Command Set: Supported 00:20:21.593 SGL Keyed: Not Supported 00:20:21.593 SGL Bit Bucket Descriptor: Not Supported 00:20:21.593 SGL Metadata Pointer: Not Supported 00:20:21.593 Oversized SGL: Not Supported 00:20:21.593 SGL Metadata Address: Not Supported 00:20:21.593 SGL Offset: Supported 00:20:21.593 Transport SGL Data Block: Not Supported 00:20:21.593 Replay Protected Memory Block: Not Supported 00:20:21.593 00:20:21.593 Firmware Slot Information 00:20:21.593 ========================= 00:20:21.593 Active slot: 0 00:20:21.593 00:20:21.593 Asymmetric Namespace Access 00:20:21.593 =========================== 00:20:21.593 Change Count : 0 00:20:21.593 Number of ANA Group Descriptors : 1 00:20:21.593 ANA Group Descriptor : 0 00:20:21.593 ANA Group ID : 1 00:20:21.593 Number of NSID Values : 1 00:20:21.593 Change Count : 0 00:20:21.593 ANA State : 1 00:20:21.593 Namespace Identifier : 1 00:20:21.593 00:20:21.593 Commands Supported and Effects 00:20:21.593 ============================== 00:20:21.593 Admin Commands 00:20:21.593 -------------- 00:20:21.593 Get Log Page (02h): Supported 00:20:21.593 Identify (06h): Supported 00:20:21.593 Abort (08h): Supported 00:20:21.593 Set Features (09h): Supported 00:20:21.593 Get Features (0Ah): Supported 00:20:21.593 Asynchronous Event Request (0Ch): Supported 00:20:21.593 Keep Alive (18h): Supported 00:20:21.593 I/O Commands 00:20:21.593 ------------ 00:20:21.593 Flush (00h): Supported 00:20:21.593 Write (01h): Supported LBA-Change 00:20:21.593 Read (02h): Supported 00:20:21.593 Write Zeroes (08h): Supported LBA-Change 00:20:21.593 Dataset Management (09h): Supported 00:20:21.593 00:20:21.593 Error Log 00:20:21.593 ========= 00:20:21.593 Entry: 0 00:20:21.593 Error Count: 0x3 00:20:21.593 Submission Queue Id: 0x0 00:20:21.593 Command Id: 0x5 00:20:21.593 Phase Bit: 0 00:20:21.593 Status Code: 0x2 00:20:21.593 Status Code Type: 0x0 00:20:21.593 Do Not Retry: 1 00:20:21.593 Error 
Location: 0x28 00:20:21.593 LBA: 0x0 00:20:21.593 Namespace: 0x0 00:20:21.593 Vendor Log Page: 0x0 00:20:21.593 ----------- 00:20:21.593 Entry: 1 00:20:21.593 Error Count: 0x2 00:20:21.593 Submission Queue Id: 0x0 00:20:21.593 Command Id: 0x5 00:20:21.593 Phase Bit: 0 00:20:21.593 Status Code: 0x2 00:20:21.593 Status Code Type: 0x0 00:20:21.593 Do Not Retry: 1 00:20:21.593 Error Location: 0x28 00:20:21.593 LBA: 0x0 00:20:21.593 Namespace: 0x0 00:20:21.593 Vendor Log Page: 0x0 00:20:21.593 ----------- 00:20:21.593 Entry: 2 00:20:21.593 Error Count: 0x1 00:20:21.593 Submission Queue Id: 0x0 00:20:21.593 Command Id: 0x4 00:20:21.593 Phase Bit: 0 00:20:21.593 Status Code: 0x2 00:20:21.593 Status Code Type: 0x0 00:20:21.593 Do Not Retry: 1 00:20:21.593 Error Location: 0x28 00:20:21.593 LBA: 0x0 00:20:21.593 Namespace: 0x0 00:20:21.593 Vendor Log Page: 0x0 00:20:21.593 00:20:21.593 Number of Queues 00:20:21.593 ================ 00:20:21.593 Number of I/O Submission Queues: 128 00:20:21.593 Number of I/O Completion Queues: 128 00:20:21.593 00:20:21.593 ZNS Specific Controller Data 00:20:21.593 ============================ 00:20:21.593 Zone Append Size Limit: 0 00:20:21.593 00:20:21.593 00:20:21.593 Active Namespaces 00:20:21.593 ================= 00:20:21.593 get_feature(0x05) failed 00:20:21.593 Namespace ID:1 00:20:21.593 Command Set Identifier: NVM (00h) 00:20:21.593 Deallocate: Supported 00:20:21.593 Deallocated/Unwritten Error: Not Supported 00:20:21.593 Deallocated Read Value: Unknown 00:20:21.593 Deallocate in Write Zeroes: Not Supported 00:20:21.593 Deallocated Guard Field: 0xFFFF 00:20:21.593 Flush: Supported 00:20:21.593 Reservation: Not Supported 00:20:21.593 Namespace Sharing Capabilities: Multiple Controllers 00:20:21.593 Size (in LBAs): 1310720 (5GiB) 00:20:21.593 Capacity (in LBAs): 1310720 (5GiB) 00:20:21.593 Utilization (in LBAs): 1310720 (5GiB) 00:20:21.593 UUID: 7f5cc697-8dab-48b4-b508-79c362846fd8 00:20:21.593 Thin Provisioning: Not Supported 00:20:21.593 Per-NS Atomic Units: Yes 00:20:21.593 Atomic Boundary Size (Normal): 0 00:20:21.593 Atomic Boundary Size (PFail): 0 00:20:21.593 Atomic Boundary Offset: 0 00:20:21.593 NGUID/EUI64 Never Reused: No 00:20:21.593 ANA group ID: 1 00:20:21.593 Namespace Write Protected: No 00:20:21.593 Number of LBA Formats: 1 00:20:21.593 Current LBA Format: LBA Format #00 00:20:21.593 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:21.593 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.593 rmmod nvme_tcp 00:20:21.593 rmmod nvme_fabrics 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:20:21.593 04:16:14 
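The configure_kernel_target steps traced earlier (nvmf/common.sh@658 through @677) build the kernel NVMe-oF/TCP target that was just queried, purely through configfs; clean_kernel_target, traced just below, unwinds the same tree. The sketch condenses those steps. The attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard kernel nvmet configfs entries and are an assumption here, since bash xtrace does not show the redirection targets; the values are the ones visible in the trace, and attr_model is what surfaces as "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" in the identify output above.

# Condensed sketch of the configfs export built above (attribute names assumed,
# values taken from the trace). nvmet_tcp is pulled in once the tcp port is set up.
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
ns=$subsys/namespaces/1
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$ns" "$port"
echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$ns/device_path"      # the first idle, non-GPT namespace found above
echo 1 > "$ns/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"        # publish the subsystem on the port
# Teardown (clean_kernel_target, below) mirrors this: echo 0 > "$ns/enable",
# remove the port symlink, rmdir "$ns" "$port" "$subsys", modprobe -r nvmet_tcp nvmet.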
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:21.593 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:20:21.853 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:21.853 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:21.853 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:21.853 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:21.853 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:21.853 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:21.853 04:16:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:22.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:22.420 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:22.680 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:22.680 00:20:22.680 real 0m2.753s 00:20:22.680 user 0m0.927s 00:20:22.680 sys 0m1.298s 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.680 ************************************ 00:20:22.680 END TEST nvmf_identify_kernel_target 00:20:22.680 ************************************ 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1142 -- # return 0 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.680 ************************************ 00:20:22.680 START TEST nvmf_auth_host 00:20:22.680 ************************************ 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:22.680 * Looking for test storage... 00:20:22.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:22.680 04:16:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:22.680 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:22.681 04:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:22.681 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:22.940 Cannot find device "nvmf_tgt_br" 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:22.940 Cannot find device "nvmf_tgt_br2" 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:22.940 Cannot find device "nvmf_tgt_br" 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:22.940 Cannot find device "nvmf_tgt_br2" 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:22.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:22.940 04:16:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:22.940 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:23.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:20:23.199 00:20:23.199 --- 10.0.0.2 ping statistics --- 00:20:23.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.199 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:23.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:20:23.199 00:20:23.199 --- 10.0.0.3 ping statistics --- 00:20:23.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.199 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:20:23.199 00:20:23.199 --- 10.0.0.1 ping statistics --- 00:20:23.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.199 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=94659 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 94659 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 94659 ']' 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
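nvmfappstart, traced just above, launches the SPDK target inside the target namespace and then blocks in waitforlisten until the JSON-RPC socket answers. The following is only a minimal stand-in for that pattern, not SPDK's actual waitforlisten implementation (which lives in autotest_common.sh); rpc.py and the rpc_get_methods call are real SPDK tooling, while the 30-second budget is an arbitrary choice for the sketch.

# Minimal illustration of the nvmfappstart/waitforlisten pattern traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
for _ in $(seq 1 30); do
    # rpc_get_methods only answers once the app has created /var/tmp/spdk.sock
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    kill -0 "$nvmfpid"    # bail out early if the target already exited
    sleep 1
done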
00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.199 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=241a00b16db8fe588a231fc375f348ff 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ejc 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 241a00b16db8fe588a231fc375f348ff 0 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 241a00b16db8fe588a231fc375f348ff 0 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=241a00b16db8fe588a231fc375f348ff 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:20:23.459 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ejc 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ejc 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ejc 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.722 04:16:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=40f4275a101d226669ece89f6a39f194fe6164eb138ac3c9fef8fd27bcb488a8 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.f4b 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 40f4275a101d226669ece89f6a39f194fe6164eb138ac3c9fef8fd27bcb488a8 3 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 40f4275a101d226669ece89f6a39f194fe6164eb138ac3c9fef8fd27bcb488a8 3 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=40f4275a101d226669ece89f6a39f194fe6164eb138ac3c9fef8fd27bcb488a8 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.f4b 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.f4b 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.f4b 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=67143b793f6664223c1f9508c45cb114299aad88ad318a86 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Pxc 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 67143b793f6664223c1f9508c45cb114299aad88ad318a86 0 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 67143b793f6664223c1f9508c45cb114299aad88ad318a86 0 
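The gen_dhchap_key calls in this stretch draw random hex from /dev/urandom (xxd -p -c0 -l <n>) and pass it to format_key DHHC-1 <hex> <digest-id>, whose inline python emits the DHHC-1:<id>:<base64>: secrets that show up later in the keyring. A minimal Python sketch of that formatting, assuming the customary layout of ASCII key material followed by a little-endian CRC-32 (the helper name and the CRC byte order are assumptions, not lifted from nvmf/common.sh):

#!/usr/bin/env python3
# Illustrative DHHC-1 secret formatting: base64(key_ascii + crc32(key_ascii))
# behind a "DHHC-1:<digest-id>:" prefix. The little-endian CRC is an assumption
# based on common practice; digest ids follow the trace (0=null, 1=sha256,
# 2=sha384, 3=sha512).
import base64
import zlib

def format_dhchap_secret(hex_key: str, digest_id: int) -> str:
    key_bytes = hex_key.encode("ascii")                 # hex string used as-is
    crc = zlib.crc32(key_bytes).to_bytes(4, "little")   # 4-byte integrity tail
    return "DHHC-1:%02x:%s:" % (digest_id, base64.b64encode(key_bytes + crc).decode())

if __name__ == "__main__":
    # 48-character key taken from the trace above, digest 0 (null)
    print(format_dhchap_secret("67143b793f6664223c1f9508c45cb114299aad88ad318a86", 0))

Running it produces a string of the same shape as the /tmp/spdk.key-null.Pxc secret generated just below.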
00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=67143b793f6664223c1f9508c45cb114299aad88ad318a86 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Pxc 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Pxc 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Pxc 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=571e63fb750678eda48f118819a34aa226c82c6b6b5dd703 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Q7b 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 571e63fb750678eda48f118819a34aa226c82c6b6b5dd703 2 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 571e63fb750678eda48f118819a34aa226c82c6b6b5dd703 2 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=571e63fb750678eda48f118819a34aa226c82c6b6b5dd703 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:20:23.722 04:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Q7b 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Q7b 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Q7b 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.722 04:16:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cbccfceb3ded6ae5cf3f4b0190c86830 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Omo 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cbccfceb3ded6ae5cf3f4b0190c86830 1 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cbccfceb3ded6ae5cf3f4b0190c86830 1 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cbccfceb3ded6ae5cf3f4b0190c86830 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:20:23.722 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Omo 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Omo 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Omo 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9d61b8e5658b0009488ca8909018ea4f 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.JSg 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9d61b8e5658b0009488ca8909018ea4f 1 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9d61b8e5658b0009488ca8909018ea4f 1 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=9d61b8e5658b0009488ca8909018ea4f 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.JSg 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.JSg 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.JSg 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e6ff4c3738fb5d7dbc1e6d312190d0fbd7e7f9686934aa9f 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Slt 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e6ff4c3738fb5d7dbc1e6d312190d0fbd7e7f9686934aa9f 2 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e6ff4c3738fb5d7dbc1e6d312190d0fbd7e7f9686934aa9f 2 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e6ff4c3738fb5d7dbc1e6d312190d0fbd7e7f9686934aa9f 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:20:23.981 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Slt 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Slt 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Slt 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:23.982 04:16:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d333102609fc20e443fea4a9488463f9 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nNQ 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d333102609fc20e443fea4a9488463f9 0 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d333102609fc20e443fea4a9488463f9 0 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d333102609fc20e443fea4a9488463f9 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nNQ 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nNQ 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.nNQ 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=02e037f91117bacc25fd2224b3c7f7dbbe792a34328d2b669b75193eb759f68b 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.vPS 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 02e037f91117bacc25fd2224b3c7f7dbbe792a34328d2b669b75193eb759f68b 3 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 02e037f91117bacc25fd2224b3c7f7dbbe792a34328d2b669b75193eb759f68b 3 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=02e037f91117bacc25fd2224b3c7f7dbbe792a34328d2b669b75193eb759f68b 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:20:23.982 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:20:24.240 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.vPS 00:20:24.240 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.vPS 00:20:24.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.240 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.vPS 00:20:24.240 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:24.240 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 94659 00:20:24.240 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 94659 ']' 00:20:24.240 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.240 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.241 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.241 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.241 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ejc 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.f4b ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.f4b 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Pxc 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Q7b ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Q7b 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Omo 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.JSg ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JSg 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Slt 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.nNQ ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.nNQ 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.vPS 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:24.500 04:16:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:24.500 04:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:24.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:25.017 Waiting for block devices as requested 00:20:25.017 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:25.017 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:25.583 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:25.583 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:25.583 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:25.583 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:25.583 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:25.583 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:25.584 No valid GPT data, bailing 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:25.584 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:25.842 No valid GPT data, bailing 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:25.842 04:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:25.842 No valid GPT data, bailing 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:25.842 No valid GPT data, bailing 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:25.842 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -a 10.0.0.1 -t tcp -s 4420 00:20:26.101 00:20:26.101 Discovery Log Number of Records 2, Generation counter 2 00:20:26.101 =====Discovery Log Entry 0====== 00:20:26.101 trtype: tcp 00:20:26.101 adrfam: ipv4 00:20:26.101 subtype: current discovery subsystem 00:20:26.101 treq: not specified, sq flow control disable supported 00:20:26.101 portid: 1 00:20:26.101 trsvcid: 4420 00:20:26.101 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:26.101 traddr: 10.0.0.1 00:20:26.101 eflags: none 00:20:26.101 sectype: none 00:20:26.101 =====Discovery Log Entry 1====== 00:20:26.101 trtype: tcp 00:20:26.101 adrfam: ipv4 00:20:26.101 subtype: nvme subsystem 00:20:26.101 treq: not specified, sq flow control disable supported 00:20:26.101 portid: 1 00:20:26.101 trsvcid: 4420 00:20:26.101 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:26.101 traddr: 10.0.0.1 00:20:26.101 eflags: none 00:20:26.101 sectype: none 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:26.101 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.102 nvme0n1 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.102 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.361 nvme0n1 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.361 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.362 
04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:26.362 04:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.362 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.621 nvme0n1 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:26.621 04:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:26.621 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.622 nvme0n1 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.622 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.881 04:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.881 04:16:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.881 nvme0n1 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:26.881 
04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.881 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
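The ffdhe2048 pass above, and the ffdhe3072 and ffdhe4096 passes that follow, all repeat one pattern per digest, DH group and key index: host/auth.sh programs the target side with a DH-HMAC-CHAP key (nvmet_auth_set_key), restricts the SPDK host to a single digest and DH group via bdev_nvme_set_options, attaches a controller over TCP to 10.0.0.1:4420 with the matching --dhchap-key/--dhchap-ctrlr-key pair, confirms the controller shows up as nvme0 via bdev_nvme_get_controllers, and detaches it before the next key. A minimal sketch of that loop, using only the commands visible in the trace and assuming the rpc_cmd and nvmet_auth_set_key helpers plus the keys/ckeys arrays defined by the test scripts:
    # per-key DH-HMAC-CHAP check, as exercised by host/auth.sh (sketch)
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
      for keyid in "${!keys[@]}"; do
        # target side: install key for this digest/dhgroup/key index
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
        # host side: allow only this digest and DH group
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # connect with the host key, plus the controller key when one is defined
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # authentication succeeded if the controller is visible, then tear down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
The keyid=4 case carries no controller key (ckey is empty), which is why the ${ckeys[keyid]:+...} expansion in the trace drops --dhchap-ctrlr-key for that iteration.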
00:20:27.140 nvme0n1 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.140 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.141 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:27.141 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.141 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.141 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:27.141 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:27.141 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:27.141 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:27.141 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.141 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:27.400 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:27.400 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:27.400 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:27.401 04:16:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.401 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.660 nvme0n1 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.660 04:16:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:27.660 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.661 04:16:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:27.661 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:27.661 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:27.661 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.661 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.661 04:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.919 nvme0n1 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:27.919 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.920 nvme0n1 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.920 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.179 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.179 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:28.179 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.179 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.179 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.179 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.180 nvme0n1 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.180 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.439 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.439 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.440 nvme0n1 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:28.440 04:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:29.007 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:29.007 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:29.007 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:29.007 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.008 04:16:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.008 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.267 nvme0n1 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.267 04:16:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.267 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.526 nvme0n1 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.526 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.527 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.785 nvme0n1 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.785 04:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.785 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.046 nvme0n1 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:30.046 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:30.047 04:16:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.047 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.307 nvme0n1 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:30.307 04:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:31.705 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:31.705 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:31.705 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.706 04:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.964 nvme0n1 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.964 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.223 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.481 nvme0n1 00:20:32.481 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.481 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.481 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.481 04:16:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.481 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.481 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.481 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.481 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.481 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.481 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.482 04:16:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.482 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.741 nvme0n1 00:20:32.741 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.741 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.741 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.741 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.741 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.741 04:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:32.741 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.741 
04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.309 nvme0n1 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.309 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.568 nvme0n1 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.568 04:16:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.568 04:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.136 nvme0n1 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:34.136 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.137 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.137 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.704 nvme0n1 00:20:34.704 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.704 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.704 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.704 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.704 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.704 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.704 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.705 04:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.705 
04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.705 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.273 nvme0n1 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.273 04:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.840 nvme0n1 00:20:35.840 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.841 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.841 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.841 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.841 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.841 04:16:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.841 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.841 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.841 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.841 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.099 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.100 04:16:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.100 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.668 nvme0n1 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.668 nvme0n1 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.668 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.669 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.669 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.669 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.669 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.669 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.669 04:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.927 nvme0n1 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:36.927 
04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:36.927 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.928 nvme0n1 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.928 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.187 
04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.187 nvme0n1 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.187 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.447 nvme0n1 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.447 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.706 nvme0n1 00:20:37.706 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.706 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.706 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.706 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.706 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.706 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.706 
04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.706 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.706 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.706 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:37.707 04:16:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.707 04:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.707 nvme0n1 00:20:37.707 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.707 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.707 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.707 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.707 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:37.967 04:16:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.967 nvme0n1 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.967 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.226 04:16:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 nvme0n1 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.226 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:38.227 
04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.227 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
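The nvmet_auth_set_key calls traced above set up the target side of DH-HMAC-CHAP for each digest/dhgroup/keyid combination before the host attempts to attach. The digest string ('hmac(sha384)'), the DH group, and the DHHC-1 secrets are echoed verbatim in the trace; the destinations of those writes are not visible in the xtrace output, so the configfs paths below are an assumption based on the Linux nvmet layout, not something this log confirms. A minimal sketch of what such a helper would do:
  # Sketch only: target-side DH-HMAC-CHAP key setup (assumed nvmet configfs paths;
  # digest/dhgroup/secret values correspond to what the trace echoes above).
  hostnqn=nqn.2024-02.io.spdk:host0
  host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn        # assumed location of the host entry
  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"           # digest under test
  echo ffdhe3072 > "$host_dir/dhchap_dhgroup"             # DH group under test
  echo "$key" > "$host_dir/dhchap_key"                    # DHHC-1:xx:<base64>: host secret
  [ -n "$ckey" ] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # controller secret for bidirectional auth (empty for keyid 4)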
00:20:38.485 nvme0n1 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:38.485 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:38.486 04:16:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.486 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.744 nvme0n1 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.744 04:16:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.744 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.745 04:16:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.745 04:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.003 nvme0n1 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.003 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.004 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.263 nvme0n1 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.263 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.522 nvme0n1 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.522 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.781 nvme0n1 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.781 04:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.781 04:16:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.781 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.039 nvme0n1 00:20:40.039 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.039 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.039 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.039 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.039 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:40.040 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:40.297 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.298 04:16:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.298 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.555 nvme0n1 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.555 04:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.814 nvme0n1 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.814 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.380 nvme0n1 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:41.380 04:16:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.380 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.381 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.638 nvme0n1 00:20:41.638 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.638 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.638 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.638 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.639 04:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.205 nvme0n1 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:42.205 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.206 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.774 nvme0n1 00:20:42.774 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.774 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.774 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.774 04:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.774 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.774 04:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.774 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.774 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.774 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.774 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.774 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.774 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.774 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:42.774 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.775 04:16:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.775 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.341 nvme0n1 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:43.341 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:43.342 04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.342 
04:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.907 nvme0n1 00:20:43.907 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.907 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.907 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.907 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.907 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.907 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.908 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.475 nvme0n1 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:44.475 04:16:37 
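
Each connect_authenticate call in the trace (host/auth.sh@55-@65) follows the same pattern: configure the allowed digest/DH group on the SPDK host, attach a controller with the key pair for that key index, confirm the controller came up, and detach it. A condensed sketch of that sequence, using only the RPCs that appear verbatim in the log (rpc_cmd being the autotest wrapper around scripts/rpc.py):

  # Condensed from the connect_authenticate trace; digest/dhgroup/keyid are the
  # loop variables and get_main_ns_ip resolves to 10.0.0.1 in this run.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # The controller only shows up here if DH-HMAC-CHAP completed successfully.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The bare "nvme0n1" lines between iterations are the bdev name reported back by the attach RPC for namespace 1, and the "[[ nvme0 == \n\v\m\e\0 ]]" form is simply how bash xtrace re-quotes the right-hand side of == inside [[ ]] so it is shown as a literal string rather than a glob pattern.
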
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.475 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.476 04:16:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.476 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.763 nvme0n1 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:44.763 04:16:37 
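
All secrets exercised here use the DHHC-1 envelope visible in the trace, DHHC-1:<xx>:<base64>:, where (to the best of my reading of the NVMe in-band authentication key format, not something this log itself proves) the two-digit field records how the secret was transformed: 00 for a plain secret, 01/02/03 for SHA-256/384/512-transformed secrets, and the base64 payload carries the secret plus a 4-byte CRC trailer. A quick, assumption-light sanity check on one of the keys taken verbatim from the log:

  # Strip the envelope, base64-decode, and count the payload bytes:
  # 48 base64 chars -> 36 bytes, i.e. a 32-byte secret plus a 4-byte trailer
  # for the ':00:' keys used in this run.
  key='DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v:'
  payload=${key#DHHC-1:*:}   # drop the 'DHHC-1:xx:' prefix
  payload=${payload%:}       # drop the trailing ':'
  printf '%s' "$payload" | base64 -d | wc -c   # prints 36
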
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.763 04:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.763 nvme0n1 00:20:44.763 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.763 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.763 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.763 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.763 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.763 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.023 nvme0n1 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.023 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.283 nvme0n1 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.283 nvme0n1 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.283 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.542 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.542 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.543 nvme0n1 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.543 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.802 04:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.802 nvme0n1 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:45.802 
04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.802 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.803 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.803 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.803 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.061 nvme0n1 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:46.061 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.062 
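The nvmet_auth_set_key calls traced at host/auth.sh@42-@51 push the digest, DH group and the DHHC-1 secrets to the target side; the echoed values presumably land in the kernel nvmet configfs entry for the allowed host. A standalone sketch of that step, assuming the configfs attribute names dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key, a host entry already created for nqn.2024-02.io.spdk:host0, and a slightly different signature (literal secrets instead of a key index):

# Hypothetical target-side key programming; attribute names and layout are assumptions.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. hmac(sha512)
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "${key}"          > "${host_dir}/dhchap_key"      # host secret (DHHC-1:...)
    if [[ -n ${ckey} ]]; then
        echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"     # controller secret for bidirectional auth
    fi
}

# Same values as the keyid=3 iteration that starts above:
nvmet_auth_set_key sha512 ffdhe3072 \
    DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: \
    DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: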
04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.062 nvme0n1 00:20:46.062 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.321 nvme0n1 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.321 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.581 nvme0n1 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.581 
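get_main_ns_ip (nvmf/common.sh@741-@755 in the trace) chooses which address to hand to bdev_nvme_attach_controller by mapping the active transport to the name of an environment variable and expanding it indirectly; in this run it resolves to 10.0.0.1. A simplified reconstruction, assuming the selector is the suite's TEST_TRANSPORT variable (tcp here) and that the NVMF_* variables are exported by the earlier test setup:

# Simplified sketch of get_main_ns_ip as seen in the trace (error handling trimmed).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # variable *name*, e.g. NVMF_INITIATOR_IP
    ip=${!ip}                              # indirect expansion of that name -> 10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}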
04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:46.581 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:46.840 04:16:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.840 04:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.840 nvme0n1 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:46.840 04:16:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:46.840 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.841 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.841 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:46.841 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:46.841 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.841 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.841 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.841 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.099 nvme0n1 00:20:47.099 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:47.100 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.100 04:16:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.359 nvme0n1 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:47.359 
04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.359 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
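Every secret cycled through this test is a DH-HMAC-CHAP key in the DHHC-1:xx:<base64>: representation (the keyid=4 entry above uses the long 03-tagged variant, keyid=2 the shorter 01-tagged one). As far as I can tell the base64 payload is the raw secret followed by a 4-byte CRC-32, so the secret size can be read back as below; treat the CRC split as an assumption to check against the spec rather than a statement of fact:

# Illustrative length check on one key from this trace (CRC-32 tail assumed).
key='DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx:'
b64=${key#DHHC-1:??:}                    # drop the "DHHC-1:<id>:" prefix
b64=${b64%:}                             # drop the trailing ':'
printf '%s' "$b64" | base64 -d | wc -c   # 36 bytes decoded
echo $(( $(printf '%s' "$b64" | base64 -d | wc -c) - 4 ))   # 32-byte secret if 4 bytes are CRC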
00:20:47.618 nvme0n1 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.618 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:47.619 04:16:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.619 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.877 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.877 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:47.877 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:47.877 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:47.878 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.878 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.878 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:47.878 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.878 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:47.878 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:47.878 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:47.878 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.878 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.878 04:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.136 nvme0n1 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.136 04:16:41 
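The @101/@102 loop markers show the overall sweep: for each DH group in the suite's dhgroups list, every key index is programmed into the target and exercised with a fresh attach/verify/detach on the host. Condensed into one loop, reusing the hypothetical helpers sketched earlier (the keys/ckeys arrays and the rpc.py invocation are assumptions carried over from those sketches; only the sha512 digest appears in this excerpt):

# Condensed view of the sweep driven by host/auth.sh@101-@104 (helpers hypothetical).
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha512 "$dhgroup" "${keys[keyid]}" "${ckeys[keyid]}"   # target side
        rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}                    # omitted when no ckey
        [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]     # authenticated?
        rpc.py bdev_nvme_detach_controller nvme0
    done
done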
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.136 04:16:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.136 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.395 nvme0n1 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:48.395 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.654 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.654 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:48.654 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.654 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:48.654 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:48.654 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:48.654 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.654 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.654 04:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.912 nvme0n1 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:48.912 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.913 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.171 nvme0n1 00:20:49.171 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.172 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.739 nvme0n1 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
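Each iteration above follows the same host-side cycle: restrict the initiator to one digest/DH-group pair, attach a controller with the key pair under test, confirm the controller appeared, then detach it before the next key. Below is a minimal standalone sketch of that cycle, assuming ./scripts/rpc.py stands in for the harness's rpc_cmd wrapper and that the key0..key4/ckey* secrets were registered earlier in auth.sh (outside this excerpt); addresses and NQNs are taken from the trace.

```bash
#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate iteration, assuming ./scripts/rpc.py
# is the standalone equivalent of the harness's rpc_cmd wrapper and that the
# DH-HMAC-CHAP keys (key2/ckey2 here) were registered earlier in auth.sh.
set -euo pipefail

rpc=./scripts/rpc.py   # assumed path to the SPDK JSON-RPC client
digest=sha512
dhgroup=ffdhe6144
keyid=2

# Restrict the initiator to the digest/DH-group pair under test.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the bidirectional key pair; DH-HMAC-CHAP runs during this connect.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The attach only succeeds when authentication passed; verify, then detach so
# the next key pair can be tried.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0
```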
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjQxYTAwYjE2ZGI4ZmU1ODhhMjMxZmMzNzVmMzQ4ZmaFWM1v: 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: ]] 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDBmNDI3NWExMDFkMjI2NjY5ZWNlODlmNmEzOWYxOTRmZTYxNjRlYjEzOGFjM2M5ZmVmOGZkMjdiY2I0ODhhODLOYx4=: 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.739 04:16:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.739 04:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.307 nvme0n1 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.307 04:16:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.307 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.874 nvme0n1 00:20:50.874 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.874 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.874 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.874 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.874 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.874 04:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JjY2ZjZWIzZGVkNmFlNWNmM2Y0YjAxOTBjODY4MzBNu+Mx: 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: ]] 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWQ2MWI4ZTU2NThiMDAwOTQ4OGNhODkwOTAxOGVhNGY5v1uO: 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.874 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.441 nvme0n1 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==: 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: ]] 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M: 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.441 04:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.030 nvme0n1 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDJlMDM3ZjkxMTE3YmFjYzI1ZmQyMjI0YjNjN2Y3ZGJiZTc5MmEzNDMyOGQyYjY2OWI3NTE5M2ViNzU5ZjY4YljiTOc=: 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:52.030 04:16:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.030 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.031 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:52.031 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.031 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.598 nvme0n1 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
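On the target side, each nvmet_auth_set_key call above pushes the matching secrets into the kernel nvmet host entry before the host reconnects. The sketch below is a plausible reconstruction rather than the script's exact body: the xtrace only shows the echoed values, so the configfs attribute names are assumptions based on the /sys/kernel/config/nvmet/hosts entry that the cleanup removes later in this log.

```bash
# Plausible reconstruction of the target-side half of each iteration. Only the
# echoed values appear in the xtrace; the dhchap_* attribute names below are
# assumptions about the kernel nvmet configfs layout.
set_target_auth() {    # hypothetical helper, not the script's exact body
    local digest=$1 dhgroup=$2 key=$3 ckey=${4:-}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    if [[ -n $ckey ]]; then
        # The controller (bidirectional) key is optional; keyid 4 above has none.
        echo "$ckey" > "$host/dhchap_ctrl_key"
    fi
}

# Example matching the sha512/ffdhe8192/keyid=3 iteration above:
set_target_auth sha512 ffdhe8192 \
    "DHHC-1:02:ZTZmZjRjMzczOGZiNWQ3ZGJjMWU2ZDMxMjE5MGQwZmJkN2U3Zjk2ODY5MzRhYTlmQsWmSg==:" \
    "DHHC-1:00:ZDMzMzEwMjYwOWZjMjBlNDQzZmVhNGE5NDg4NDYzZjkKcF5M:"
```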
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcxNDNiNzkzZjY2NjQyMjNjMWY5NTA4YzQ1Y2IxMTQyOTlhYWQ4OGFkMzE4YTg2IQrQEg==: 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTcxZTYzZmI3NTA2NzhlZGE0OGYxMTg4MTlhMzRhYTIyNmM4MmM2YjZiNWRkNzAzkxHM5Q==: 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # 
local es=0 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.598 request: 00:20:52.598 { 00:20:52.598 "name": "nvme0", 00:20:52.598 "trtype": "tcp", 00:20:52.598 "traddr": "10.0.0.1", 00:20:52.598 "adrfam": "ipv4", 00:20:52.598 "trsvcid": "4420", 00:20:52.598 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:52.598 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:52.598 "prchk_reftag": false, 00:20:52.598 "prchk_guard": false, 00:20:52.598 "hdgst": false, 00:20:52.598 "ddgst": false, 00:20:52.598 "method": "bdev_nvme_attach_controller", 00:20:52.598 "req_id": 1 00:20:52.598 } 00:20:52.598 Got JSON-RPC error response 00:20:52.598 response: 00:20:52.598 { 00:20:52.598 "code": -5, 00:20:52.598 "message": "Input/output error" 00:20:52.598 } 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.598 04:16:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.598 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.598 request: 00:20:52.598 { 00:20:52.598 "name": "nvme0", 00:20:52.598 "trtype": "tcp", 00:20:52.598 "traddr": "10.0.0.1", 00:20:52.598 "adrfam": "ipv4", 00:20:52.598 "trsvcid": "4420", 00:20:52.598 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:52.598 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:52.598 "prchk_reftag": false, 00:20:52.598 "prchk_guard": false, 00:20:52.598 "hdgst": false, 00:20:52.598 "ddgst": false, 00:20:52.598 "dhchap_key": "key2", 00:20:52.856 "method": "bdev_nvme_attach_controller", 00:20:52.857 "req_id": 1 00:20:52.857 } 00:20:52.857 Got JSON-RPC error response 00:20:52.857 response: 00:20:52.857 { 00:20:52.857 "code": -5, 00:20:52.857 "message": "Input/output error" 00:20:52.857 } 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.857 04:16:45 
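The NOT wrapper used above inverts the exit status, so the JSON-RPC -5 (Input/output error) responses are the expected outcome: an attach without the key the target was configured with, or with only part of the key pair, must be rejected, and the controller list must stay empty afterwards. A standalone version of the same check, under the same rpc.py assumption as the earlier sketch:

```bash
# Sketch of the negative check: without the key the target expects, the attach
# must fail (the -5 Input/output error above) and no controller may be left
# behind. Same rpc.py and address assumptions as the earlier sketch.
rpc=./scripts/rpc.py

if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "unauthenticated attach unexpectedly succeeded" >&2
    exit 1
fi

# Mirror of the 'jq length' check above: the controller list must be empty.
[[ $("$rpc" bdev_nvme_get_controllers | jq length) -eq 0 ]]
```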
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:52.857 04:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.857 request: 00:20:52.857 { 00:20:52.857 "name": "nvme0", 00:20:52.857 "trtype": "tcp", 00:20:52.857 "traddr": "10.0.0.1", 00:20:52.857 "adrfam": "ipv4", 00:20:52.857 "trsvcid": "4420", 00:20:52.857 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:52.857 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:20:52.857 "prchk_reftag": false, 00:20:52.857 "prchk_guard": false, 00:20:52.857 "hdgst": false, 00:20:52.857 "ddgst": false, 00:20:52.857 "dhchap_key": "key1", 00:20:52.857 "dhchap_ctrlr_key": "ckey2", 00:20:52.857 "method": "bdev_nvme_attach_controller", 00:20:52.857 "req_id": 1 00:20:52.857 } 00:20:52.857 Got JSON-RPC error response 00:20:52.857 response: 00:20:52.857 { 00:20:52.857 "code": -5, 00:20:52.857 "message": "Input/output error" 00:20:52.857 } 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:52.857 rmmod nvme_tcp 00:20:52.857 rmmod nvme_fabrics 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 94659 ']' 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 94659 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 94659 ']' 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 94659 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94659 00:20:52.857 killing process with pid 94659 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94659' 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 94659 00:20:52.857 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@972 -- # wait 94659 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:53.115 04:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:54.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:54.049 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:54.049 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:54.049 04:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ejc /tmp/spdk.key-null.Pxc /tmp/spdk.key-sha256.Omo /tmp/spdk.key-sha384.Slt /tmp/spdk.key-sha512.vPS /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:54.049 04:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:54.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:54.308 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:54.308 0000:00:10.0 
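With all key and DH-group combinations exercised, the test tears down both ends: the SPDK initiator process is killed and the kernel nvmet target is dismantled over configfs in the order traced above. A condensed sketch of that teardown follows; the redirect target of the bare echo 0 is not visible in the xtrace, so disabling the namespace before removing it is an assumption.

```bash
# Condensed sketch of the kernel-target teardown traced above. The redirect
# target of the bare `echo 0` is not visible in the xtrace; disabling the
# namespace before removal is an assumption.
cfs=/sys/kernel/config/nvmet
subsys=nqn.2024-02.io.spdk:cnode0
host=nqn.2024-02.io.spdk:host0

rm "$cfs/subsystems/$subsys/allowed_hosts/$host"        # revoke host access
rmdir "$cfs/hosts/$host"                                # drop the host entry (and its DH-CHAP keys)
echo 0 > "$cfs/subsystems/$subsys/namespaces/1/enable"  # assumed target of 'echo 0'
rm -f "$cfs/ports/1/subsystems/$subsys"                 # unlink subsystem from the port
rmdir "$cfs/subsystems/$subsys/namespaces/1"
rmdir "$cfs/ports/1"
rmdir "$cfs/subsystems/$subsys"
modprobe -r nvmet_tcp nvmet                             # unload the kernel target
```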
(1b36 0010): Already using the uio_pci_generic driver 00:20:54.308 00:20:54.308 real 0m31.753s 00:20:54.308 user 0m29.552s 00:20:54.308 sys 0m3.494s 00:20:54.308 04:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:54.308 04:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.308 ************************************ 00:20:54.308 END TEST nvmf_auth_host 00:20:54.308 ************************************ 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.567 ************************************ 00:20:54.567 START TEST nvmf_digest 00:20:54.567 ************************************ 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:54.567 * Looking for test storage... 00:20:54.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.567 04:16:47 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.567 
04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.567 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:54.568 Cannot find device "nvmf_tgt_br" 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.568 Cannot find device "nvmf_tgt_br2" 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:54.568 Cannot find device "nvmf_tgt_br" 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:54.568 Cannot find device "nvmf_tgt_br2" 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:20:54.568 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:54.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:54.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:54.827 04:16:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip 
netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:54.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:20:54.827 00:20:54.827 --- 10.0.0.2 ping statistics --- 00:20:54.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.827 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:54.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:54.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:54.827 00:20:54.827 --- 10.0.0.3 ping statistics --- 00:20:54.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.827 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:54.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:54.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:54.827 00:20:54.827 --- 10.0.0.1 ping statistics --- 00:20:54.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.827 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:54.827 ************************************ 00:20:54.827 START TEST nvmf_digest_clean 00:20:54.827 ************************************ 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=96197 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 96197 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:54.827 
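For reference, the veth topology that nvmf_veth_init builds in the lines above can be reproduced by hand with roughly the following commands (a condensed sketch of what the log shows; the second target interface for 10.0.0.3 and the teardown steps are left out):

    ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge ties the host-side ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                         # sanity check before the target starts

The target itself is then launched inside that namespace with --wait-for-rpc, exactly as on the nvmfappstart line above, so nothing listens on 10.0.0.2:4420 until the test has pushed its RPC configuration.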
04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 96197 ']' 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.827 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:55.086 [2024-07-23 04:16:48.219021] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:20:55.086 [2024-07-23 04:16:48.219120] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.086 [2024-07-23 04:16:48.342203] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:55.086 [2024-07-23 04:16:48.362861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.344 [2024-07-23 04:16:48.431228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.344 [2024-07-23 04:16:48.431291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.344 [2024-07-23 04:16:48.431306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.344 [2024-07-23 04:16:48.431317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.344 [2024-07-23 04:16:48.431326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.344 [2024-07-23 04:16:48.431374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.344 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:55.344 [2024-07-23 04:16:48.559523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:55.344 null0 00:20:55.345 [2024-07-23 04:16:48.605815] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.345 [2024-07-23 04:16:48.629948] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96222 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96222 /var/tmp/bperf.sock 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 96222 ']' 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.345 04:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:55.604 [2024-07-23 04:16:48.691480] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:20:55.604 [2024-07-23 04:16:48.691566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96222 ] 00:20:55.604 [2024-07-23 04:16:48.813705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:55.604 [2024-07-23 04:16:48.834792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.604 [2024-07-23 04:16:48.913772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.539 04:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.539 04:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:56.539 04:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:56.539 04:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:56.539 04:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:56.797 [2024-07-23 04:16:49.978523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:56.797 04:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:56.797 04:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.055 nvme0n1 00:20:57.055 04:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:57.055 04:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:57.314 Running I/O for 2 seconds... 
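Each digest pass above drives I/O through a dedicated bdevperf instance rather than through the kernel initiator. A minimal sketch of that flow for this first pass (randread, 4096-byte I/O, queue depth 128), using the workspace paths shown in the log (SPDK below is just shorthand for the repo root, not a variable the script defines):

    SPDK=/home/vagrant/spdk_repo/spdk
    # start bdevperf idle (-z) and paused (--wait-for-rpc) on its own RPC socket
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # once the socket is listening, finish framework init and attach the TCP
    # controller with data digest enabled (--ddgst), which is what exercises crc32c
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # kick off the 2-second run against the freshly created nvme0n1 bdev
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests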
00:20:59.234 00:20:59.234 Latency(us) 00:20:59.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.234 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:59.234 nvme0n1 : 2.01 18316.99 71.55 0.00 0.00 6982.96 6553.60 17158.52 00:20:59.234 =================================================================================================================== 00:20:59.234 Total : 18316.99 71.55 0.00 0.00 6982.96 6553.60 17158.52 00:20:59.234 0 00:20:59.234 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:59.234 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:59.234 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:59.234 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:59.234 | select(.opcode=="crc32c") 00:20:59.234 | "\(.module_name) \(.executed)"' 00:20:59.234 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96222 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 96222 ']' 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 96222 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96222 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:59.500 killing process with pid 96222 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96222' 00:20:59.500 Received shutdown signal, test time was about 2.000000 seconds 00:20:59.500 00:20:59.500 Latency(us) 00:20:59.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.500 =================================================================================================================== 00:20:59.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 96222 00:20:59.500 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
96222 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96285 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96285 /var/tmp/bperf.sock 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 96285 ']' 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.759 04:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:59.759 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:59.759 Zero copy mechanism will not be used. 00:20:59.759 [2024-07-23 04:16:53.042033] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:20:59.759 [2024-07-23 04:16:53.042121] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96285 ] 00:21:00.018 [2024-07-23 04:16:53.163869] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
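The get_accel_stats step a few lines up is the real pass/fail criterion for each run: it asks the bdevperf app which accel module executed the crc32c operations and how many times. A sketch of the same query, assuming the bperf RPC socket is still listening:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # with scan_dsa=false the expected module is "software" and the executed count
    # must be non-zero, which is what the (( acc_executed > 0 )) and
    # [[ software == software ]] checks above assert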
00:21:00.018 [2024-07-23 04:16:53.179671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.018 [2024-07-23 04:16:53.240620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.955 04:16:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.955 04:16:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:21:00.955 04:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:00.955 04:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:00.955 04:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:00.955 [2024-07-23 04:16:54.284483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:01.213 04:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:01.213 04:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:01.472 nvme0n1 00:21:01.472 04:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:01.472 04:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:01.472 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:01.472 Zero copy mechanism will not be used. 00:21:01.472 Running I/O for 2 seconds... 
00:21:03.374 00:21:03.374 Latency(us) 00:21:03.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.374 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:03.374 nvme0n1 : 2.00 8314.99 1039.37 0.00 0.00 1921.15 1750.11 6345.08 00:21:03.374 =================================================================================================================== 00:21:03.374 Total : 8314.99 1039.37 0.00 0.00 1921.15 1750.11 6345.08 00:21:03.374 0 00:21:03.374 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:03.375 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:03.375 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:03.375 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:03.375 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:03.375 | select(.opcode=="crc32c") 00:21:03.375 | "\(.module_name) \(.executed)"' 00:21:03.634 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:03.634 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:03.634 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:03.634 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:03.634 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96285 00:21:03.634 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 96285 ']' 00:21:03.634 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 96285 00:21:03.634 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:21:03.634 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.634 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96285 00:21:03.893 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:03.893 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:03.893 killing process with pid 96285 00:21:03.893 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96285' 00:21:03.893 Received shutdown signal, test time was about 2.000000 seconds 00:21:03.893 00:21:03.893 Latency(us) 00:21:03.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.893 =================================================================================================================== 00:21:03.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:03.893 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 96285 00:21:03.893 04:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
96285 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96344 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96344 /var/tmp/bperf.sock 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 96344 ']' 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.893 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:04.152 [2024-07-23 04:16:57.245479] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:21:04.152 [2024-07-23 04:16:57.245569] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96344 ] 00:21:04.152 [2024-07-23 04:16:57.367362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
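After each pass the bperf process is torn down with the killprocess pattern visible above; roughly, assuming $pid holds the PID reported on the bperfpid line:

    kill -0 "$pid"                     # is the process still alive?
    ps --no-headers -o comm= "$pid"    # comm is compared against "sudo" before signalling
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                        # reap it so the RPC socket goes away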
00:21:04.152 [2024-07-23 04:16:57.383666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.152 [2024-07-23 04:16:57.439630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.152 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.152 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:21:04.152 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:04.152 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:04.152 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:04.720 [2024-07-23 04:16:57.759827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:04.720 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:04.720 04:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:04.720 nvme0n1 00:21:04.978 04:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:04.978 04:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:04.978 Running I/O for 2 seconds... 
00:21:06.881 00:21:06.881 Latency(us) 00:21:06.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.881 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:06.881 nvme0n1 : 2.01 19811.96 77.39 0.00 0.00 6455.84 2055.45 14417.92 00:21:06.881 =================================================================================================================== 00:21:06.881 Total : 19811.96 77.39 0.00 0.00 6455.84 2055.45 14417.92 00:21:06.881 0 00:21:06.881 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:06.881 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:06.881 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:06.881 | select(.opcode=="crc32c") 00:21:06.881 | "\(.module_name) \(.executed)"' 00:21:06.881 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:06.881 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96344 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 96344 ']' 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 96344 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96344 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:07.141 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:07.141 killing process with pid 96344 00:21:07.142 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96344' 00:21:07.142 Received shutdown signal, test time was about 2.000000 seconds 00:21:07.142 00:21:07.142 Latency(us) 00:21:07.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.142 =================================================================================================================== 00:21:07.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.142 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 96344 00:21:07.142 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
96344 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96393 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96393 /var/tmp/bperf.sock 00:21:07.400 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 96393 ']' 00:21:07.401 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:07.401 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:07.401 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:07.401 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.401 04:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:07.401 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:07.401 Zero copy mechanism will not be used. 00:21:07.401 [2024-07-23 04:17:00.683430] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:21:07.401 [2024-07-23 04:17:00.683539] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96393 ] 00:21:07.660 [2024-07-23 04:17:00.799995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:07.660 [2024-07-23 04:17:00.817483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.660 [2024-07-23 04:17:00.887126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.227 04:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.227 04:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:21:08.227 04:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:08.227 04:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:08.227 04:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:08.485 [2024-07-23 04:17:01.778751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:08.485 04:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:08.485 04:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:08.744 nvme0n1 00:21:09.003 04:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:09.003 04:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:09.003 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:09.003 Zero copy mechanism will not be used. 00:21:09.003 Running I/O for 2 seconds... 
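For orientation, the four nvmf_digest_clean passes differ only in the I/O pattern handed to bdevperf; the mapping from the run_bperf arguments to the bdevperf flags, taken from the invocations above:

    # run_bperf randread  4096   128 false  ->  bdevperf -w randread  -o 4096   -q 128 -t 2
    # run_bperf randread  131072 16  false  ->  bdevperf -w randread  -o 131072 -q 16  -t 2
    # run_bperf randwrite 4096   128 false  ->  bdevperf -w randwrite -o 4096   -q 128 -t 2
    # run_bperf randwrite 131072 16  false  ->  bdevperf -w randwrite -o 131072 -q 16  -t 2
    # the 131072-byte runs additionally log "I/O size ... greater than zero copy
    # threshold (65536)", so zero copy is not used for those two passes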
00:21:10.905 00:21:10.905 Latency(us) 00:21:10.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.905 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:10.905 nvme0n1 : 2.00 7605.31 950.66 0.00 0.00 2097.72 1616.06 4944.99 00:21:10.905 =================================================================================================================== 00:21:10.905 Total : 7605.31 950.66 0.00 0.00 2097.72 1616.06 4944.99 00:21:10.905 0 00:21:10.905 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:10.905 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:10.905 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:10.905 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:10.905 | select(.opcode=="crc32c") 00:21:10.905 | "\(.module_name) \(.executed)"' 00:21:10.905 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:11.164 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:11.164 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:11.164 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:11.164 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:11.164 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96393 00:21:11.164 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 96393 ']' 00:21:11.164 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 96393 00:21:11.164 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:21:11.164 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.165 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96393 00:21:11.165 killing process with pid 96393 00:21:11.165 Received shutdown signal, test time was about 2.000000 seconds 00:21:11.165 00:21:11.165 Latency(us) 00:21:11.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.165 =================================================================================================================== 00:21:11.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.165 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:11.165 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:11.165 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96393' 00:21:11.165 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 96393 00:21:11.165 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
96393 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 96197 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 96197 ']' 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 96197 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96197 00:21:11.423 killing process with pid 96197 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96197' 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 96197 00:21:11.423 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 96197 00:21:11.682 00:21:11.682 real 0m16.716s 00:21:11.682 user 0m32.152s 00:21:11.682 sys 0m4.758s 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 ************************************ 00:21:11.682 END TEST nvmf_digest_clean 00:21:11.682 ************************************ 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 ************************************ 00:21:11.682 START TEST nvmf_digest_error 00:21:11.682 ************************************ 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=96476 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 96476 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # 
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 96476 ']' 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.682 04:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 [2024-07-23 04:17:04.990400] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:21:11.682 [2024-07-23 04:17:04.990487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.941 [2024-07-23 04:17:05.112960] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:11.941 [2024-07-23 04:17:05.131603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.941 [2024-07-23 04:17:05.193134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.941 [2024-07-23 04:17:05.193194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.941 [2024-07-23 04:17:05.193205] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.941 [2024-07-23 04:17:05.193212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.941 [2024-07-23 04:17:05.193219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
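For the digest_error test the target is deliberately started with --wait-for-rpc: the crc32c opcode has to be re-routed to the "error" accel module before the framework initializes, so that crc32c results on the target side can be corrupted on demand later in the test. A condensed sketch of the target-side sequence implied by the trace (the nvmf_tgt and accel_assign_opc invocations are verbatim; the framework_start_init call and the transport/subsystem/listener setup are batched by common_target_config and surface here only as notices):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # defaults to /var/tmp/spdk.sock

    # start the target in its netns with framework init deferred
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &

    # send crc32c through the error-injection accel module, then finish init
    $RPC accel_assign_opc -o crc32c -m error
    $RPC framework_start_init      # implied by the app coming up; not shown verbatim in the trace

    # ...the TCP transport, the null0-backed subsystem and the 10.0.0.2:4420 listener
    # are created next; their *NOTICE* lines appear below...

    # later, from the test script: corrupt injected crc32c results so the initiator
    # reports the data digest errors seen further down (flags copied from the trace)
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

The COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that dominate the rest of this section are the expected outcome: the initiator's nvme_tcp layer detects the bad data digest on received data and fails the read with a transient transport error, with the bdev retry behaviour configured up front via bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1, as shown just below.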
00:21:11.941 [2024-07-23 04:17:05.193258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.875 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.876 [2024-07-23 04:17:05.933678] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.876 04:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.876 [2024-07-23 04:17:05.991299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:12.876 null0 00:21:12.876 [2024-07-23 04:17:06.034249] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.876 [2024-07-23 04:17:06.058379] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96508 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96508 /var/tmp/bperf.sock 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:12.876 04:17:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 96508 ']' 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:12.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.876 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.876 [2024-07-23 04:17:06.108599] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:21:12.876 [2024-07-23 04:17:06.108850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96508 ] 00:21:13.135 [2024-07-23 04:17:06.225093] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:13.135 [2024-07-23 04:17:06.243113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.135 [2024-07-23 04:17:06.309071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.135 [2024-07-23 04:17:06.365954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:13.135 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.135 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:21:13.135 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:13.135 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:13.393 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:13.393 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.393 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:13.393 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.393 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:13.393 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:13.652 nvme0n1 00:21:13.652 
04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:13.652 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.652 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:13.652 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.652 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:13.652 04:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:13.911 Running I/O for 2 seconds... 00:21:13.911 [2024-07-23 04:17:07.054702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.054753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.054788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.068669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.068708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.068740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.082737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.082776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.082809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.096702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.096741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.096772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.110751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.110790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.110821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.124983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.125026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.125040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.138714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.138753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.138785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.152691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.152729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.152761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.166532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.166569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.166601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.180386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.180424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.180456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.194220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.194258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.194289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.208372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.208408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.208440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.222336] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.222374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.222405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.236505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.236543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.236575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.911 [2024-07-23 04:17:07.250261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:13.911 [2024-07-23 04:17:07.250298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.911 [2024-07-23 04:17:07.250332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.264080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.264118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.264133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.277915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.277951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.277983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.291713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.291749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.291781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.305756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.305792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.305824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:14.171 [2024-07-23 04:17:07.319742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.319779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.319810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.333681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.333718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.333749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.347629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.347666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.347698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.361560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.361597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.361628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.375400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.375437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.375468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.389317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.389354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.389385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.403210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.403249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.403263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.417109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.417148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.417162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.430881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.430926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.430958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.444682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.444719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.444751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.458592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.458629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.458660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.472440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.472478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.472509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.486264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.486299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.486330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.500202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.500239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.500270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.171 [2024-07-23 04:17:07.514080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.171 [2024-07-23 04:17:07.514117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.171 [2024-07-23 04:17:07.514148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.528008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.528044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.528075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.541788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.541829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.541861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.555764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.555801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.555832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.569662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.569702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.569734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.583606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.583642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.583673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.597539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.597579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:14.449 [2024-07-23 04:17:07.597610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.611445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.611481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.611513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.625403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.625443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.625474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.639411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.639448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.639480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.653277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.653312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.653343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.667109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.667147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.667162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.680850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.680887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.680969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.694733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.694769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:19945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.694801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.708641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.708678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.708710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.722625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.722661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.722692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.736562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.736598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.736630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.750997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.751043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.751059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.449 [2024-07-23 04:17:07.768000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.449 [2024-07-23 04:17:07.768041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.449 [2024-07-23 04:17:07.768057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.783608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.783645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.783675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.799015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.799076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.799091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.813522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.813559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.813591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.827553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.827589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.827620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.841459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.841496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.841527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.855411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.855448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.855479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.869302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.869337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.869368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.883181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.883219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.883234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.896943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.896972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.896984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.910748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.910784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.910815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.924802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.924838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.924869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.944756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.944797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.944827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.958984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.959055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.959072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.975070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.975110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.975126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:07.991089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:07.991128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:07.991144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:08.006226] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:08.006264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:08.006295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:08.020999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:08.021037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:08.021069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:08.035966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:08.036003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:08.036017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.714 [2024-07-23 04:17:08.050849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.714 [2024-07-23 04:17:08.050888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.714 [2024-07-23 04:17:08.050953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.065822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.065860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.065892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.080756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.080794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.080825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.095795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.095834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.095865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.110789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.110828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.110860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.126015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.126053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.126068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.141181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.141221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.141252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.155721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.155761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.155793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.169749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.169786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.169818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.183888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.183955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.183988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.197741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.197778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.197809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.211609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.211646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.211676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.225564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.225601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.225633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.239534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.239571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.239601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.253417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.253460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.253491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.267322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.267374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.267405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.281178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.281214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.281246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.294972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.295008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.295060] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.974 [2024-07-23 04:17:08.308812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:14.974 [2024-07-23 04:17:08.308849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.974 [2024-07-23 04:17:08.308880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.233 [2024-07-23 04:17:08.322584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.233 [2024-07-23 04:17:08.322621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.233 [2024-07-23 04:17:08.322652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.233 [2024-07-23 04:17:08.336382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.233 [2024-07-23 04:17:08.336419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.233 [2024-07-23 04:17:08.336451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.233 [2024-07-23 04:17:08.350266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.233 [2024-07-23 04:17:08.350306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.233 [2024-07-23 04:17:08.350337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.233 [2024-07-23 04:17:08.364062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.233 [2024-07-23 04:17:08.364099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.364129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.377741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.377781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.377812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.391714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.391751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:15.234 [2024-07-23 04:17:08.391782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.405718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.405755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.405786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.419761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.419798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.419830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.433765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.433802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.433832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.447668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.447705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.447735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.461642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.461680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.461711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.475584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.475620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.475650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.489481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.489518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:24977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.489550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.503402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.503439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.503469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.517329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.517370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.517401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.531150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.531188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.531202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.545051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.545086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.545117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.558839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.558876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.558907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-07-23 04:17:08.572577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.234 [2024-07-23 04:17:08.572614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.234 [2024-07-23 04:17:08.572645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.586338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.586374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.586406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.600214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.600251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.600265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.614054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.614090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.614122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.627884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.627967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.627983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.641809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.641847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.641878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.655786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.655824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.655855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.669727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.669764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.669795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.683672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 
00:21:15.493 [2024-07-23 04:17:08.683709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.683740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.697631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.697668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.697699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.711531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.711568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.711598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.725402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.725438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.725469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.739195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.739234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.739249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.752968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.753003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.753034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.766698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.766738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.766770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.783231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.783273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.783290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.799428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.493 [2024-07-23 04:17:08.799465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.493 [2024-07-23 04:17:08.799497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.493 [2024-07-23 04:17:08.814824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.494 [2024-07-23 04:17:08.814860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.494 [2024-07-23 04:17:08.814891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.494 [2024-07-23 04:17:08.828824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.494 [2024-07-23 04:17:08.828861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.494 [2024-07-23 04:17:08.828892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.753 [2024-07-23 04:17:08.842768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.753 [2024-07-23 04:17:08.842805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.753 [2024-07-23 04:17:08.842836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.753 [2024-07-23 04:17:08.862541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.753 [2024-07-23 04:17:08.862578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.753 [2024-07-23 04:17:08.862609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.753 [2024-07-23 04:17:08.876448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.753 [2024-07-23 04:17:08.876484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.753 [2024-07-23 04:17:08.876515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.753 [2024-07-23 04:17:08.890359] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.753 [2024-07-23 04:17:08.890396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.753 [2024-07-23 04:17:08.890427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.753 [2024-07-23 04:17:08.904256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.753 [2024-07-23 04:17:08.904293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.753 [2024-07-23 04:17:08.904323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.753 [2024-07-23 04:17:08.918142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.753 [2024-07-23 04:17:08.918179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.753 [2024-07-23 04:17:08.918211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.753 [2024-07-23 04:17:08.932015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.753 [2024-07-23 04:17:08.932051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.753 [2024-07-23 04:17:08.932084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.753 [2024-07-23 04:17:08.945963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.753 [2024-07-23 04:17:08.946000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.753 [2024-07-23 04:17:08.946014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.753 [2024-07-23 04:17:08.959833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.753 [2024-07-23 04:17:08.959872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.753 [2024-07-23 04:17:08.959904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.753 [2024-07-23 04:17:08.974213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0) 00:21:15.753 [2024-07-23 04:17:08.974250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:15.753 [2024-07-23 04:17:08.974281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:15.753 [2024-07-23 04:17:08.987996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0)
00:21:15.753 [2024-07-23 04:17:08.988033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:15.753 [2024-07-23 04:17:08.988066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:15.753 [2024-07-23 04:17:09.001911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0)
00:21:15.753 [2024-07-23 04:17:09.001952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:15.753 [2024-07-23 04:17:09.001983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:15.753 [2024-07-23 04:17:09.015791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0)
00:21:15.753 [2024-07-23 04:17:09.015829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:15.753 [2024-07-23 04:17:09.015860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:15.753 [2024-07-23 04:17:09.029779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23175f0)
00:21:15.753 [2024-07-23 04:17:09.029819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:15.753 [2024-07-23 04:17:09.029850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:15.753
00:21:15.753 Latency(us)
00:21:15.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:15.753 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:15.753 nvme0n1 : 2.01 17898.33 69.92 0.00 0.00 7146.46 6464.23 26929.34
00:21:15.753 ===================================================================================================================
00:21:15.753 Total : 17898.33 69.92 0.00 0.00 7146.46 6464.23 26929.34
00:21:15.753 0
00:21:15.753 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:15.753 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:15.753 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:15.753 | .driver_specific
00:21:15.753 | .nvme_error
00:21:15.753 | .status_code
00:21:15.753 | .command_transient_transport_error'
00:21:15.753 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 140 > 0 ))
00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96508
00:21:16.012 04:17:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 96508 ']' 00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 96508 00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96508 00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96508' 00:21:16.012 killing process with pid 96508 00:21:16.012 Received shutdown signal, test time was about 2.000000 seconds 00:21:16.012 00:21:16.012 Latency(us) 00:21:16.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.012 =================================================================================================================== 00:21:16.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 96508 00:21:16.012 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 96508 00:21:16.270 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:16.270 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96555 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96555 /var/tmp/bperf.sock 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 96555 ']' 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:16.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
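The pass/fail check that the get_transient_errcount trace above performs reduces to one RPC call plus a jq filter. A minimal standalone sketch of that check follows; the repo path, RPC socket and jq path are taken from the trace, while the variable name errcount and the condensed dotted jq syntax are illustrative only:

    SPDK=/home/vagrant/spdk_repo/spdk          # repo location as it appears in the trace
    BPERF_SOCK=/var/tmp/bperf.sock             # bdevperf's RPC socket

    # Pull per-bdev I/O statistics from bdevperf and extract the NVMe error counter
    # that --nvme-error-stat maintains for status 00/22, i.e. the COMMAND TRANSIENT
    # TRANSPORT ERROR completions printed throughout the run above.
    errcount=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # digest.sh only proceeds if at least one such error was recorded; this run counted 140.
    (( errcount > 0 ))

Because the bdev layer was told to retry indefinitely (--bdev-retry-count -1), the injected digest failures show up only in this counter, which is consistent with the summary table above still reporting healthy IOPS and 0 Fail/s.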
00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.271 04:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:16.271 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:16.271 Zero copy mechanism will not be used. 00:21:16.271 [2024-07-23 04:17:09.571465] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:21:16.271 [2024-07-23 04:17:09.571550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96555 ] 00:21:16.530 [2024-07-23 04:17:09.693359] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:16.530 [2024-07-23 04:17:09.710734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.530 [2024-07-23 04:17:09.767502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.530 [2024-07-23 04:17:09.823717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:17.464 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.464 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:21:17.464 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:17.464 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:17.464 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:17.464 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.464 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:17.464 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.464 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:17.464 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:17.722 nvme0n1 00:21:17.722 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:17.722 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.722 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:17.722 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.722 04:17:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:17.722 04:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:17.722 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:17.722 Zero copy mechanism will not be used. 00:21:17.722 Running I/O for 2 seconds... 00:21:17.981 [2024-07-23 04:17:11.068889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.981 [2024-07-23 04:17:11.068999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.981 [2024-07-23 04:17:11.069017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.981 [2024-07-23 04:17:11.073363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.981 [2024-07-23 04:17:11.073421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.981 [2024-07-23 04:17:11.073455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.981 [2024-07-23 04:17:11.077730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.981 [2024-07-23 04:17:11.077769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.981 [2024-07-23 04:17:11.077801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.981 [2024-07-23 04:17:11.081908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.981 [2024-07-23 04:17:11.081947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.981 [2024-07-23 04:17:11.081978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.981 [2024-07-23 04:17:11.085953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.085993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.086025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.090115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.090155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.090186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
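The setup for this second pass (randread, 131072-byte I/Os, queue depth 16) is scattered through the xtrace output above; condensed, it is roughly the sequence below. The binaries, flags, addresses and the bperf.sock socket are copied from the trace, while the wait loop and the default socket used for the accel_error_inject_error calls are assumptions standing in for the harness helpers waitforlisten and rpc_cmd:

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf idle (-z) on its own RPC socket so the test can drive it remotely.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done    # stand-in for waitforlisten

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Injection is switched off while the controller attaches with data digest enabled...
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ...then re-enabled in corrupt mode (flags exactly as traced), after which reads begin
    # completing with the data digest errors recorded in the log that follows.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed run; this is what prints "Running I/O for 2 seconds..." above.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests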
00:21:17.982 [2024-07-23 04:17:11.094217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.094257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.094289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.098281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.098322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.098353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.102351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.102391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.102422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.106413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.106469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.106485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.110703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.110744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.110776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.115077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.115119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.115135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.119365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.119407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.119439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.123397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.123438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.123470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.127505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.127546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.127577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.131696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.131736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.131767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.135904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.135991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.136008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.140067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.140106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.140138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.144267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.144307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.144339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.148371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.148412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.148444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.152523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.152563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.152595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.156768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.156808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.156840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.160948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.160987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.161019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.165106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.165146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.165178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.169273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.169314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.169346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.173353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.173393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.173425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.177478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.177520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:17.982 [2024-07-23 04:17:11.177551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.181593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.181633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.181664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.185795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.185835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.185866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.190027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.190066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.190098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.982 [2024-07-23 04:17:11.194075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.982 [2024-07-23 04:17:11.194113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.982 [2024-07-23 04:17:11.194145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.198122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.198161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.198193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.202265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.202305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.202337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.206263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.206303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.206334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.210337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.210379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.210410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.214499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.214539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.214570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.218650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.218689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.218720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.222765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.222804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.222835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.226881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.226946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.226977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.230843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.230881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.230958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.235063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.235104] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.235119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.239127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.239170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.239202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.243181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.243223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.243254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.247266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.247308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.247356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.251378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.251418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.251449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.255528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.255569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.255600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.259690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.259730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.259762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.263804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 
04:17:11.263844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.263875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.267946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.267985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.268017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.272017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.272056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.272087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.276049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.276088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.276119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.280099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.280138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.280169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.284162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.284201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.284233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.288145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.288184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.288216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.292157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.292197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.292228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.296220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.296259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.296290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.300291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.300332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.983 [2024-07-23 04:17:11.300363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.983 [2024-07-23 04:17:11.304309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.983 [2024-07-23 04:17:11.304349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.984 [2024-07-23 04:17:11.304381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.984 [2024-07-23 04:17:11.308363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.984 [2024-07-23 04:17:11.308403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.984 [2024-07-23 04:17:11.308434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.984 [2024-07-23 04:17:11.312447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.984 [2024-07-23 04:17:11.312487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.984 [2024-07-23 04:17:11.312519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.984 [2024-07-23 04:17:11.316622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.984 [2024-07-23 04:17:11.316663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.984 [2024-07-23 04:17:11.316694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.984 [2024-07-23 04:17:11.320830] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:17.984 [2024-07-23 04:17:11.320871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.984 [2024-07-23 04:17:11.320902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.243 [2024-07-23 04:17:11.324981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.243 [2024-07-23 04:17:11.325020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-07-23 04:17:11.325052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.243 [2024-07-23 04:17:11.329081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.329122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.329153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.333310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.333350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.333382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.337395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.337439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.337470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.341520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.341560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.341591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.345699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.345744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.345776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:18.244 [2024-07-23 04:17:11.350068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.350109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.350140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.354221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.354265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.354296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.358288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.358328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.358360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.362433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.362473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.362504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.366580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.366619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.366650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.370692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.370730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.370763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.374810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.374848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.374879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.378853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.378917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.378950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.383077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.383118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.383133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.387186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.387228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.387260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.391221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.391263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.391295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.395283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.395339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.395370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.399299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.399354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.399385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.403497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.403537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.403569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.407701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.407741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.407772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.411947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.411986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.412017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.416069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.416107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.416138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.420225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.420265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.420297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.424301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.424341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.424372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.428417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.428457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-07-23 04:17:11.428488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.432611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.432651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:18.244 [2024-07-23 04:17:11.432683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.244 [2024-07-23 04:17:11.436770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.244 [2024-07-23 04:17:11.436811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.436843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.440857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.440922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.440938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.445009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.445048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.445080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.449147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.449188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.449219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.453241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.453280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.453312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.457393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.457432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.457464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.461511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.461551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.461583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.465728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.465770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.465802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.469774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.469814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.469846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.473912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.473950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.473982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.477991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.478031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.478062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.482072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.482112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.482143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.486212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.486251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.486282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.490309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.490353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.490385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.494276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.494316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.494347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.498327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.498366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.498398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.502343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.502382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.502414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.506295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.506335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.506365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.510311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.510351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.510383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.514333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.514377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.514409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.518320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 
00:21:18.245 [2024-07-23 04:17:11.518360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.518391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.522302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.522345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.522376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.526373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.526413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.526445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.530474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.530517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.530549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.534545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.534583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.534615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.538628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.538665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.538696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.542818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.542858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.245 [2024-07-23 04:17:11.542890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.245 [2024-07-23 04:17:11.546861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.245 [2024-07-23 04:17:11.546924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.246 [2024-07-23 04:17:11.546957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.246 [2024-07-23 04:17:11.550983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.246 [2024-07-23 04:17:11.551056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.246 [2024-07-23 04:17:11.551073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.246 [2024-07-23 04:17:11.555065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.246 [2024-07-23 04:17:11.555106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.246 [2024-07-23 04:17:11.555121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.246 [2024-07-23 04:17:11.559158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.246 [2024-07-23 04:17:11.559200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.246 [2024-07-23 04:17:11.559232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.246 [2024-07-23 04:17:11.563215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.246 [2024-07-23 04:17:11.563272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.246 [2024-07-23 04:17:11.563288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.246 [2024-07-23 04:17:11.567298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.246 [2024-07-23 04:17:11.567371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.246 [2024-07-23 04:17:11.567403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.246 [2024-07-23 04:17:11.571886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.246 [2024-07-23 04:17:11.571989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.246 [2024-07-23 04:17:11.572006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.246 [2024-07-23 04:17:11.576482] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.246 [2024-07-23 04:17:11.576523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.246 [2024-07-23 04:17:11.576556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.246 [2024-07-23 04:17:11.580950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.246 [2024-07-23 04:17:11.580991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.246 [2024-07-23 04:17:11.581022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.246 [2024-07-23 04:17:11.585765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.246 [2024-07-23 04:17:11.585807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.246 [2024-07-23 04:17:11.585839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.505 [2024-07-23 04:17:11.590436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.505 [2024-07-23 04:17:11.590494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.505 [2024-07-23 04:17:11.590511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.505 [2024-07-23 04:17:11.595200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.505 [2024-07-23 04:17:11.595246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.505 [2024-07-23 04:17:11.595263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.505 [2024-07-23 04:17:11.600183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.505 [2024-07-23 04:17:11.600229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.505 [2024-07-23 04:17:11.600246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.505 [2024-07-23 04:17:11.605578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.505 [2024-07-23 04:17:11.605814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.606086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.610949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.611219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.611437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.616306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.616570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.616808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.621600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.621831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.622022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.626399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.626648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.626910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.631520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.631563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.631595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.635857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.635946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.635963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.640208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.640249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.640281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.644499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.644542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.644574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.649078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.649120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.649136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.653363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.653404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.653436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.657721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.657764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.657797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.662234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.662280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.662343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.666621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.666662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.666695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.670969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.671008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.671065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.675352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.675412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.675444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.680032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.680074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.680107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.684389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.684431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.684463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.688637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.688680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.688712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.692993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.693035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.693067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.697426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.697466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.697498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.701708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.701750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:18.506 [2024-07-23 04:17:11.701783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.706072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.706112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.706145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.710672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.710711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.710744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.714983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.715068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.715085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.719273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.506 [2024-07-23 04:17:11.719316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.506 [2024-07-23 04:17:11.719366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.506 [2024-07-23 04:17:11.723483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.723524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.723556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.727977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.728019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.728051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.732363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.732404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.732437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.736637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.736679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.736711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.741247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.741305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.741337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.745494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.745536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.745568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.749763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.749804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.749837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.754078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.754118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.754151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.758505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.758543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.758575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.762791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.762830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.762862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.767171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.767214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.767229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.771306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.771362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.771393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.775426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.775466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.775497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.779712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.779753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.779785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.783918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.783989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.784021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.788080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.788120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.788151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.792263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 
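Every record in this run follows the same three-line pattern: nvme_tcp_accel_seq_recv_compute_crc32_done() reports a data digest mismatch on the receive path for tqpair 0x69a540, the affected READ (qid 1, cid 15, len 32, varying LBAs) is printed, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). In NVMe/TCP the data digest (DDGST) is a CRC-32C over the PDU data; the Python sketch below only illustrates what that check amounts to. It is not SPDK's code path, and the function and parameter names are made up for the example.

    # Minimal CRC-32C (Castagnoli) check; illustrative only, not an SPDK API.
    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def data_digest_ok(pdu_payload: bytes, received_ddgst: int) -> bool:
        # received_ddgst is assumed to be the DDGST field already decoded to an int.
        # A mismatch here is what the log above prints as "data digest error".
        return crc32c(pdu_payload) == received_ddgst
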
00:21:18.507 [2024-07-23 04:17:11.792303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.792334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.796513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.796585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.796600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.800966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.801007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.801022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.805182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.805224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.805256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.809421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.809462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.809494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.813660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.813702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.813734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.818033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.818073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.818105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.822212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.822253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.822285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.826426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.826470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.507 [2024-07-23 04:17:11.826506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.507 [2024-07-23 04:17:11.831170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.507 [2024-07-23 04:17:11.831215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.508 [2024-07-23 04:17:11.831231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.508 [2024-07-23 04:17:11.835768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.508 [2024-07-23 04:17:11.835811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.508 [2024-07-23 04:17:11.835843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.508 [2024-07-23 04:17:11.840615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.508 [2024-07-23 04:17:11.840656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.508 [2024-07-23 04:17:11.840684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.508 [2024-07-23 04:17:11.845627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.508 [2024-07-23 04:17:11.845663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.508 [2024-07-23 04:17:11.845692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.767 [2024-07-23 04:17:11.850428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.767 [2024-07-23 04:17:11.850467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.767 [2024-07-23 04:17:11.850499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.767 [2024-07-23 04:17:11.854974] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.767 [2024-07-23 04:17:11.855060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.767 [2024-07-23 04:17:11.855077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.767 [2024-07-23 04:17:11.859670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.767 [2024-07-23 04:17:11.859710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.767 [2024-07-23 04:17:11.859741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.767 [2024-07-23 04:17:11.864219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.767 [2024-07-23 04:17:11.864261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.767 [2024-07-23 04:17:11.864308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.767 [2024-07-23 04:17:11.868594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.767 [2024-07-23 04:17:11.868632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.767 [2024-07-23 04:17:11.868663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.767 [2024-07-23 04:17:11.873132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.767 [2024-07-23 04:17:11.873172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.767 [2024-07-23 04:17:11.873187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.767 [2024-07-23 04:17:11.877486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.767 [2024-07-23 04:17:11.877524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.767 [2024-07-23 04:17:11.877556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.767 [2024-07-23 04:17:11.881805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.767 [2024-07-23 04:17:11.881845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.767 [2024-07-23 04:17:11.881877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:18.767 [2024-07-23 04:17:11.886034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.886074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.886106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.890165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.890204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.890235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.894230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.894268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.894300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.898257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.898297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.898328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.902567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.902607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.902639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.907009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.907074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.907107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.911446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.911502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.911534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.915763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.915804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.915836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.920133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.920174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.920205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.924337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.924378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.924410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.928655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.928708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.928739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.933067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.933106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.933138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.937151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.937190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.937221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.941170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.941213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.941244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.945399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.945439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.945470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.949524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.949565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.949596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.953560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.953600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.953631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.957757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.957796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.957828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.961925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.961968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.962000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.966000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.966042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.966073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.970140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:18.768 [2024-07-23 04:17:11.970210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.974182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.974225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.974256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.978226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.978265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.978296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.982262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.982301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.982333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.986333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.986372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.986403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.990404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.990443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.768 [2024-07-23 04:17:11.990475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.768 [2024-07-23 04:17:11.994556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.768 [2024-07-23 04:17:11.994594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:11.994625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:11.998572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:11.998614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:11.998645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.002658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.002696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.002727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.006769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.006807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.006838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.010879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.010962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.010994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.015007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.015059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.015090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.019080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.019120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.019135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.023143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.023184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.023216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.027216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.027258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.027273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.031279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.031336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.031368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.035358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.035398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.035430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.039441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.039479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.039511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.043649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.043689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.043720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.047845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.047884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.047926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.052049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.052088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.052119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.056146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 
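For triage, these records can be paired mechanically: each "data digest error" print is immediately followed by the READ print that names the affected LBA. A small parsing sketch in Python; the regex is derived only from the record format visible above, and the script/file names are placeholders, not part of the test suite.

    import re
    import sys
    from collections import Counter

    # Pairs each "data digest error" record with the READ print that follows it;
    # re.S lets the match span wrapped console lines.
    PAIR_RE = re.compile(
        r"data digest error on tqpair=\([^)]*\).*?"
        r"READ sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)",
        re.S,
    )

    def tally_digest_error_reads(console_log_path: str) -> Counter:
        with open(console_log_path, encoding="utf-8", errors="replace") as f:
            text = f.read()
        return Counter(int(lba) for lba, _len in PAIR_RE.findall(text))

    if __name__ == "__main__":
        for lba, n in tally_digest_error_reads(sys.argv[1]).most_common(10):
            print(f"lba {lba}: {n} digest-error reads")

The most_common(10) output gives a quick view of whether the digest errors cluster on particular LBAs or are spread evenly across the workload.
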
00:21:18.769 [2024-07-23 04:17:12.056185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.056216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.060197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.060235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.060267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.064303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.064342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.064373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.068397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.068436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.068467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.072633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.072674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.072705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.077053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.077092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.077107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.081315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.081353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.081384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.085561] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.085601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.085633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.089753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.089794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.089825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.093842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.093925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.093941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.098228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.098300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.098315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.769 [2024-07-23 04:17:12.102394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.769 [2024-07-23 04:17:12.102434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.769 [2024-07-23 04:17:12.102466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.770 [2024-07-23 04:17:12.106764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:18.770 [2024-07-23 04:17:12.106803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.770 [2024-07-23 04:17:12.106835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.111015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.111081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.111096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:21:19.030 [2024-07-23 04:17:12.115130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.115173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.115204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.119358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.119399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.119431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.123572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.123612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.123644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.127775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.127816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.127849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.132134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.132174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.132205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.136403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.136445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.136477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.140573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.140613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.140644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.144764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.144805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.144836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.148976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.149015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.149046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.153112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.153151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.153182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.157148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.157187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.157219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.161338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.161378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.161409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.165454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.165495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.165526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.169536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.169576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.169607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.173685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.173726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.173758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.177872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.177942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.177974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.182048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.182086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.182118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.186208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.186249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.186298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.190211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.190251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.190283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.194262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.194301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.030 [2024-07-23 04:17:12.194332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.030 [2024-07-23 04:17:12.198298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.030 [2024-07-23 04:17:12.198339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.031 [2024-07-23 04:17:12.198371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.202259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.202303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.202334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.206286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.206326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.206357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.210354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.210393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.210424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.214433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.214472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.214504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.218528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.218566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.218598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.222552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.222591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.222622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.226743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.226786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.226818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.230857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.230908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.230941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.234853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.234933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.234949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.238947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.238986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.239017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.242948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.242986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.243018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.246964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.247003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.247045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.250956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.250994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.251042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.254987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.255051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.255068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.259070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.259110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.259124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.263117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.263158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.263190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.267220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.267263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.267295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.271331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.271371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.271403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.275330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.275370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.275401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.279341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.279404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.279436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.283453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 
00:21:19.031 [2024-07-23 04:17:12.283493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.283524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.287608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.287648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.287679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.291744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.291784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.291816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.295920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.295993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.296024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.300153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.300193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.300224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.304173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.031 [2024-07-23 04:17:12.304211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.031 [2024-07-23 04:17:12.304243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.031 [2024-07-23 04:17:12.308194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.308233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.308265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.312204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.312243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.312274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.316297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.316336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.316367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.320374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.320413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.320444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.324412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.324452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.324483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.328508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.328548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.328579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.332711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.332751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.332782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.337028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.337070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.337102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.341144] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.341184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.341215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.345209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.345247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.345279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.349250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.349289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.349321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.353214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.353254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.353286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.357366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.357410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.357441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.361454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.361494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.361525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.032 [2024-07-23 04:17:12.365497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.365537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.365568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:21:19.032 [2024-07-23 04:17:12.369589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.032 [2024-07-23 04:17:12.369629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.032 [2024-07-23 04:17:12.369660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.292 [2024-07-23 04:17:12.373753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.292 [2024-07-23 04:17:12.373792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.292 [2024-07-23 04:17:12.373823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.292 [2024-07-23 04:17:12.377938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.292 [2024-07-23 04:17:12.377982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.292 [2024-07-23 04:17:12.378014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.292 [2024-07-23 04:17:12.382050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.292 [2024-07-23 04:17:12.382089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.292 [2024-07-23 04:17:12.382120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.292 [2024-07-23 04:17:12.386161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.292 [2024-07-23 04:17:12.386204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.292 [2024-07-23 04:17:12.386235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.292 [2024-07-23 04:17:12.390168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.292 [2024-07-23 04:17:12.390211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.292 [2024-07-23 04:17:12.390242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.292 [2024-07-23 04:17:12.394191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.292 [2024-07-23 04:17:12.394235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.292 [2024-07-23 04:17:12.394266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.292 [2024-07-23 04:17:12.398199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.292 [2024-07-23 04:17:12.398243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.292 [2024-07-23 04:17:12.398275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.292 [2024-07-23 04:17:12.402289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.292 [2024-07-23 04:17:12.402328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.402359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.406372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.406416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.406447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.410548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.410587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.410618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.414618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.414659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.414691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.418670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.418709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.418741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.422691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.422728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.422760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.426785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.426827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.426859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.430922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.430959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.430990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.435150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.435192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.435223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.439214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.439257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.439271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.443255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.443297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.443345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.447361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.447402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.447433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.451519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.451558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.293 [2024-07-23 04:17:12.451590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.455648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.455687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.455718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.459845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.459885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.459950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.464022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.464061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.464092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.468037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.468075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.468106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.472050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.472088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.472120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.476148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.476187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.476218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.480214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.480253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.480285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.484268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.484306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.484338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.488270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.488309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.488341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.492311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.492351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.492383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.496347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.496387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.496418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.500444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.500484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.500516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.504549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.504593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.293 [2024-07-23 04:17:12.504624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.293 [2024-07-23 04:17:12.508686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.293 [2024-07-23 04:17:12.508726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.508758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.512905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.512943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.512973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.516964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.517003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.517034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.521072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.521113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.521145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.525141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.525180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.525211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.529235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.529275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.529306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.533294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.533333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.533365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.537387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 
00:21:19.294 [2024-07-23 04:17:12.537426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.537458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.541452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.541492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.541524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.545569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.545609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.545640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.549732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.549772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.549804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.553910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.553951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.553982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.558029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.558072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.558103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.562059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.562099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.562130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.566257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.566297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.566328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.570259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.570303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.570334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.574321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.574360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.574391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.578361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.578403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.578435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.582463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.582506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.582539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.586572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.586611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.586642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.590656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.590694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.590725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.594776] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.594814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.594846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.598961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.599000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.599059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.603142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.603184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.603199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.607136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.607177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.607208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.611448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.611489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.611522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.615542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.615582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.294 [2024-07-23 04:17:12.615613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.294 [2024-07-23 04:17:12.619694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.294 [2024-07-23 04:17:12.619734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.295 [2024-07-23 04:17:12.619765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:21:19.295 [2024-07-23 04:17:12.623864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.295 [2024-07-23 04:17:12.623934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.295 [2024-07-23 04:17:12.623968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.295 [2024-07-23 04:17:12.628013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.295 [2024-07-23 04:17:12.628052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.295 [2024-07-23 04:17:12.628083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.295 [2024-07-23 04:17:12.632133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.295 [2024-07-23 04:17:12.632173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.295 [2024-07-23 04:17:12.632205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.554 [2024-07-23 04:17:12.636234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.554 [2024-07-23 04:17:12.636274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.554 [2024-07-23 04:17:12.636322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.554 [2024-07-23 04:17:12.640339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.554 [2024-07-23 04:17:12.640380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.640411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.644394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.644434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.644465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.648479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.648518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.648549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.652550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.652595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.652627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.656730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.656770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.656801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.660879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.660954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.660985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.665076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.665116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.665147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.669168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.669208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.669240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.673227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.673266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.673297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.677276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.677319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.677351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.681344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.681387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.681419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.685459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.685503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.685535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.689574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.689614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.689646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.693701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.693745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.693776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.697986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.698030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.698061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.702028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.702068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.702099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.706189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.706228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.555 [2024-07-23 04:17:12.706260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.710135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.710178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.710209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.714167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.714206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.714238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.718163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.718207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.718238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.722177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.722221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.722253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.726128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.726170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.726202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.730249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.730290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.730321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.734206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.734245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.734276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.738318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.738358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.738389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.742360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.742403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.555 [2024-07-23 04:17:12.742435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.555 [2024-07-23 04:17:12.746432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.555 [2024-07-23 04:17:12.746470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.746502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.750446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.750490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.750521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.754522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.754566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.754598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.758775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.758814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.758846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.763363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.763404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.763436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.767758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.767799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.767831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.772460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.772501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.772516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.777102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.777145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.777174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.781875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.781979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.781996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.786556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.786599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.786630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.791167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.791211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.791244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.795773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 
00:21:19.556 [2024-07-23 04:17:12.795814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.795845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.800323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.800364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.800395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.804896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.805019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.805037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.809563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.809605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.809637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.813812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.813853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.813884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.818172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.818216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.818247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.822426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.822467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.822499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.826954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.826993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.827050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.831158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.831200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.831232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.835457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.835497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.835529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.839828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.839870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.839902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.844363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.844401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.844433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.848627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.848699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.848716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.853406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.853462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.853477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.858186] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.858231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.858248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.863113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.556 [2024-07-23 04:17:12.863157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.556 [2024-07-23 04:17:12.863174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.556 [2024-07-23 04:17:12.868094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.557 [2024-07-23 04:17:12.868138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-23 04:17:12.868155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.557 [2024-07-23 04:17:12.872814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.557 [2024-07-23 04:17:12.872853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-23 04:17:12.872885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.557 [2024-07-23 04:17:12.877555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.557 [2024-07-23 04:17:12.877594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-23 04:17:12.877626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.557 [2024-07-23 04:17:12.882415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.557 [2024-07-23 04:17:12.882454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-23 04:17:12.882486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.557 [2024-07-23 04:17:12.886842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.557 [2024-07-23 04:17:12.886881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-23 04:17:12.886959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:21:19.557 [2024-07-23 04:17:12.891461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.557 [2024-07-23 04:17:12.891502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-23 04:17:12.891534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.557 [2024-07-23 04:17:12.895755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.557 [2024-07-23 04:17:12.895796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.557 [2024-07-23 04:17:12.895828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.816 [2024-07-23 04:17:12.900109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.816 [2024-07-23 04:17:12.900167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.816 [2024-07-23 04:17:12.900183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.816 [2024-07-23 04:17:12.904434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.816 [2024-07-23 04:17:12.904476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.816 [2024-07-23 04:17:12.904508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.816 [2024-07-23 04:17:12.908717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.816 [2024-07-23 04:17:12.908757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.816 [2024-07-23 04:17:12.908790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.816 [2024-07-23 04:17:12.913080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.816 [2024-07-23 04:17:12.913120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.816 [2024-07-23 04:17:12.913153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.816 [2024-07-23 04:17:12.917590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.816 [2024-07-23 04:17:12.917628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.816 [2024-07-23 04:17:12.917661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.816 [2024-07-23 04:17:12.922000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.816 [2024-07-23 04:17:12.922040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.816 [2024-07-23 04:17:12.922073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.816 [2024-07-23 04:17:12.926188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.926229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.926261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.930443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.930484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.930517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.934813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.934852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.934885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.939181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.939224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.939240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.943429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.943469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.943501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.947929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.948016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.948034] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.952220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.952261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.952294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.956442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.956483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.956515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.960796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.960837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.960869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.965356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.965397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.965429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.969782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.969821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.969852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.974311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.974351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.974384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.978379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.978423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 
04:17:12.978454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.982479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.982519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.982551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.986607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.986647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.986678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.990726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.990764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.990795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.994924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.994965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.994997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:12.999069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:12.999108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:12.999140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:13.003245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:13.003291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:13.003307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:13.007401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:13.007441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:13.007472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:13.011548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:13.011588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:13.011619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:13.015686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:13.015726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:13.015757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:13.019842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:13.019883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:13.019964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:13.024002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:13.024042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:13.024074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:13.028208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:13.028249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:13.028281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:13.032275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:13.032330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.817 [2024-07-23 04:17:13.032361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.817 [2024-07-23 04:17:13.036421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.817 [2024-07-23 04:17:13.036460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.818 [2024-07-23 04:17:13.036492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.818 [2024-07-23 04:17:13.040547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.818 [2024-07-23 04:17:13.040586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.818 [2024-07-23 04:17:13.040618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.818 [2024-07-23 04:17:13.044704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.818 [2024-07-23 04:17:13.044743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.818 [2024-07-23 04:17:13.044776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.818 [2024-07-23 04:17:13.048885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.818 [2024-07-23 04:17:13.048957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.818 [2024-07-23 04:17:13.048990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.818 [2024-07-23 04:17:13.053137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.818 [2024-07-23 04:17:13.053176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.818 [2024-07-23 04:17:13.053207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.818 [2024-07-23 04:17:13.057188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.818 [2024-07-23 04:17:13.057226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.818 [2024-07-23 04:17:13.057258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.818 [2024-07-23 04:17:13.061348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.818 [2024-07-23 04:17:13.061388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.818 [2024-07-23 04:17:13.061419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.818 [2024-07-23 04:17:13.065421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a540) 00:21:19.818 [2024-07-23 04:17:13.065461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.818 [2024-07-23 04:17:13.065492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.818 00:21:19.818 Latency(us) 00:21:19.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.818 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:19.818 nvme0n1 : 2.00 7353.98 919.25 0.00 0.00 2172.41 1772.45 5659.93 00:21:19.818 =================================================================================================================== 00:21:19.818 Total : 7353.98 919.25 0.00 0.00 2172.41 1772.45 5659.93 00:21:19.818 0 00:21:19.818 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:19.818 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:19.818 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:19.818 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:19.818 | .driver_specific 00:21:19.818 | .nvme_error 00:21:19.818 | .status_code 00:21:19.818 | .command_transient_transport_error' 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 475 > 0 )) 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96555 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 96555 ']' 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 96555 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96555 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96555' 00:21:20.076 killing process with pid 96555 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 96555 00:21:20.076 Received shutdown signal, test time was about 2.000000 seconds 00:21:20.076 00:21:20.076 Latency(us) 00:21:20.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.076 =================================================================================================================== 00:21:20.076 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.076 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 96555 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- 
# run_bperf_err randwrite 4096 128 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96614 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96614 /var/tmp/bperf.sock 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 96614 ']' 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:20.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.335 04:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:20.335 [2024-07-23 04:17:13.628511] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:21:20.335 [2024-07-23 04:17:13.628787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96614 ] 00:21:20.595 [2024-07-23 04:17:13.751136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
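Before the randwrite run that is starting here, the trace above (host/digest.sh@71, @27, @18 and @28) verified the previous randread run by reading the transient-transport-error counter back over the bperf RPC socket and requiring it to be non-zero (475 in this run). A minimal sketch of that check, reconstructed from the sh -x lines rather than copied from the script, and assuming only the rpc.py path, socket and jq filter shown in the trace:

  # Read the per-bdev NVMe error counters exposed by bdev_get_iostat and pull out
  # the COMMAND TRANSIENT TRANSPORT ERROR count via jq, as host/digest.sh does above.
  get_transient_errcount() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }
  # The digest-error test passes only if at least one such error was counted:
  (( $(get_transient_errcount nvme0n1) > 0 ))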
00:21:20.595 [2024-07-23 04:17:13.767805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.595 [2024-07-23 04:17:13.832677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.595 [2024-07-23 04:17:13.886631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:21.189 04:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.189 04:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:21:21.189 04:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:21.189 04:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:21.448 04:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:21.448 04:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.448 04:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:21.448 04:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.448 04:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:21.448 04:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:21.706 nvme0n1 00:21:21.706 04:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:21.706 04:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.706 04:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:21.706 04:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.706 04:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:21.706 04:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:21.965 Running I/O for 2 seconds... 
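The setup that produces the data digest errors below is driven over RPC by host/digest.sh; the following is a condensed reconstruction from the sh -x trace above, not a standalone script. bperf_rpc expands to scripts/rpc.py -s /var/tmp/bperf.sock and bperf_py to examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock, as the trace shows; rpc_cmd is the autotest helper whose target socket is not visible in this excerpt.

  # enable per-bdev NVMe error counters (the nvme_error block read back by
  # bdev_get_iostat) and set the bdev retry count to -1, as the test does
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # start with crc32c error injection disabled
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # attach the controller with TCP data digest enabled (--ddgst)
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # inject crc32c corruption (-t corrupt -i 256, as traced), which surfaces as the
  # data digest errors and TRANSIENT TRANSPORT ERROR completions seen in this log
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  # run the queued bdevperf job: randwrite, 4096-byte IOs, queue depth 128, 2 seconds
  bperf_py perform_tests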
00:21:21.965 [2024-07-23 04:17:15.144581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fef90 00:21:21.965 [2024-07-23 04:17:15.146861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.146928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.158455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190feb58 00:21:21.965 [2024-07-23 04:17:15.160774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.160809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.171991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fe2e8 00:21:21.965 [2024-07-23 04:17:15.174273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.174311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.186770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fda78 00:21:21.965 [2024-07-23 04:17:15.189418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.189459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.202205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fd208 00:21:21.965 [2024-07-23 04:17:15.204605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.204642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.216603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fc998 00:21:21.965 [2024-07-23 04:17:15.218903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.218969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.230621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fc128 00:21:21.965 [2024-07-23 04:17:15.232956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.233051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.244582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fb8b8 00:21:21.965 [2024-07-23 04:17:15.246803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.246837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.258654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fb048 00:21:21.965 [2024-07-23 04:17:15.260882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.260943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.272772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fa7d8 00:21:21.965 [2024-07-23 04:17:15.274968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.275003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.286780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f9f68 00:21:21.965 [2024-07-23 04:17:15.288954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.288990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:21.965 [2024-07-23 04:17:15.301009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f96f8 00:21:21.965 [2024-07-23 04:17:15.303125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.965 [2024-07-23 04:17:15.303176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:22.224 [2024-07-23 04:17:15.315042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f8e88 00:21:22.224 [2024-07-23 04:17:15.317119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.224 [2024-07-23 04:17:15.317155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.224 [2024-07-23 04:17:15.328940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f8618 00:21:22.224 [2024-07-23 04:17:15.331001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.224 [2024-07-23 04:17:15.331062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:22.224 [2024-07-23 04:17:15.342835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f7da8 00:21:22.224 [2024-07-23 04:17:15.344920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.344955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.356848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f7538 00:21:22.225 [2024-07-23 04:17:15.358937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.358973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.370648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f6cc8 00:21:22.225 [2024-07-23 04:17:15.372721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.372754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.384128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f6458 00:21:22.225 [2024-07-23 04:17:15.386068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.386101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.397431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f5be8 00:21:22.225 [2024-07-23 04:17:15.399451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.399486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.410672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f5378 00:21:22.225 [2024-07-23 04:17:15.412651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.412684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.424093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f4b08 00:21:22.225 [2024-07-23 04:17:15.426088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.426127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.437340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f4298 00:21:22.225 [2024-07-23 04:17:15.439241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.439279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.450488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f3a28 00:21:22.225 [2024-07-23 04:17:15.452401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.452435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.463749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f31b8 00:21:22.225 [2024-07-23 04:17:15.465622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.465654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.477042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f2948 00:21:22.225 [2024-07-23 04:17:15.478868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.478927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.490128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f20d8 00:21:22.225 [2024-07-23 04:17:15.491969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.492003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.503404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f1868 00:21:22.225 [2024-07-23 04:17:15.505161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.505195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.516640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f0ff8 00:21:22.225 [2024-07-23 04:17:15.518489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.518523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.529977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f0788 00:21:22.225 [2024-07-23 04:17:15.531806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.531843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.543394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190eff18 00:21:22.225 [2024-07-23 04:17:15.545135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.545170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:22.225 [2024-07-23 04:17:15.556586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ef6a8 00:21:22.225 [2024-07-23 04:17:15.558367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.225 [2024-07-23 04:17:15.558416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:22.484 [2024-07-23 04:17:15.569822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190eee38 00:21:22.484 [2024-07-23 04:17:15.571664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.484 [2024-07-23 04:17:15.571700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:22.484 [2024-07-23 04:17:15.583135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ee5c8 00:21:22.484 [2024-07-23 04:17:15.584814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.484 [2024-07-23 04:17:15.584847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.484 [2024-07-23 04:17:15.596297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190edd58 00:21:22.484 [2024-07-23 04:17:15.597959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.484 [2024-07-23 04:17:15.598001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:22.484 [2024-07-23 04:17:15.609384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ed4e8 00:21:22.484 [2024-07-23 04:17:15.611101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.484 [2024-07-23 04:17:15.611136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:22.484 [2024-07-23 04:17:15.622778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ecc78 00:21:22.484 [2024-07-23 04:17:15.624489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.484 [2024-07-23 04:17:15.624523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.484 [2024-07-23 04:17:15.636034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ec408 00:21:22.484 [2024-07-23 04:17:15.637633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.484 [2024-07-23 04:17:15.637666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:22.484 [2024-07-23 04:17:15.649156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ebb98 00:21:22.484 [2024-07-23 04:17:15.650784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.484 [2024-07-23 04:17:15.650817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:22.484 [2024-07-23 04:17:15.662545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190eb328 00:21:22.484 [2024-07-23 04:17:15.664172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.484 [2024-07-23 04:17:15.664207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:22.484 [2024-07-23 04:17:15.675699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190eaab8 00:21:22.484 [2024-07-23 04:17:15.677291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.484 [2024-07-23 04:17:15.677325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:22.484 [2024-07-23 04:17:15.688933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ea248 00:21:22.485 [2024-07-23 04:17:15.690529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 04:17:15.690563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:22.485 [2024-07-23 04:17:15.702143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e99d8 00:21:22.485 [2024-07-23 04:17:15.703735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 04:17:15.703771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.485 [2024-07-23 04:17:15.715438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e9168 00:21:22.485 [2024-07-23 04:17:15.716946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 04:17:15.716987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.485 [2024-07-23 04:17:15.728579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e88f8 00:21:22.485 [2024-07-23 04:17:15.730128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 04:17:15.730162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:22.485 [2024-07-23 04:17:15.742125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e8088 00:21:22.485 [2024-07-23 04:17:15.743674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 04:17:15.743709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:22.485 [2024-07-23 04:17:15.755331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e7818 00:21:22.485 [2024-07-23 04:17:15.756811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 04:17:15.756845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.485 [2024-07-23 04:17:15.768511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e6fa8 00:21:22.485 [2024-07-23 04:17:15.770008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 04:17:15.770042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.485 [2024-07-23 04:17:15.781612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e6738 00:21:22.485 [2024-07-23 04:17:15.783142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 04:17:15.783180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.485 [2024-07-23 04:17:15.794781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e5ec8 00:21:22.485 [2024-07-23 04:17:15.796352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 
04:17:15.796402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.485 [2024-07-23 04:17:15.808060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e5658 00:21:22.485 [2024-07-23 04:17:15.809531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 04:17:15.809565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:22.485 [2024-07-23 04:17:15.821259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e4de8 00:21:22.485 [2024-07-23 04:17:15.822698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.485 [2024-07-23 04:17:15.822731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.834591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e4578 00:21:22.744 [2024-07-23 04:17:15.836043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.836077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.847758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e3d08 00:21:22.744 [2024-07-23 04:17:15.849180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.849213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.860941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e3498 00:21:22.744 [2024-07-23 04:17:15.862353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.862386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.874051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e2c28 00:21:22.744 [2024-07-23 04:17:15.875461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.875496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.887295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e23b8 00:21:22.744 [2024-07-23 04:17:15.888623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8338 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:22.744 [2024-07-23 04:17:15.888656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.900652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e1b48 00:21:22.744 [2024-07-23 04:17:15.902236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.902273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.915551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e12d8 00:21:22.744 [2024-07-23 04:17:15.917048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.917087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.931443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e0a68 00:21:22.744 [2024-07-23 04:17:15.932743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.932777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.946269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e01f8 00:21:22.744 [2024-07-23 04:17:15.947665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.947701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.960124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190df988 00:21:22.744 [2024-07-23 04:17:15.961435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.961468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.973626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190df118 00:21:22.744 [2024-07-23 04:17:15.974859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.974920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:15.986810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190de8a8 00:21:22.744 [2024-07-23 04:17:15.988159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:2889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:15.988193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:16.000007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190de038 00:21:22.744 [2024-07-23 04:17:16.001225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:16.001259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:16.018385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190de038 00:21:22.744 [2024-07-23 04:17:16.020626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:16.020662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:16.031650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190de8a8 00:21:22.744 [2024-07-23 04:17:16.033862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:16.033923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:16.045028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190df118 00:21:22.744 [2024-07-23 04:17:16.047181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:16.047219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:16.058116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190df988 00:21:22.744 [2024-07-23 04:17:16.060332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:16.060369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:16.071336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e01f8 00:21:22.744 [2024-07-23 04:17:16.073491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:16.073524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:22.744 [2024-07-23 04:17:16.084660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e0a68 00:21:22.744 [2024-07-23 04:17:16.086784] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.744 [2024-07-23 04:17:16.086818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:23.003 [2024-07-23 04:17:16.097927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e12d8 00:21:23.003 [2024-07-23 04:17:16.100053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.003 [2024-07-23 04:17:16.100090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:23.003 [2024-07-23 04:17:16.111006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e1b48 00:21:23.003 [2024-07-23 04:17:16.113137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.003 [2024-07-23 04:17:16.113170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.124220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e23b8 00:21:23.004 [2024-07-23 04:17:16.126262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.126295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.137364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e2c28 00:21:23.004 [2024-07-23 04:17:16.139454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.139489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.150646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e3498 00:21:23.004 [2024-07-23 04:17:16.152957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.152999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.164398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e3d08 00:21:23.004 [2024-07-23 04:17:16.166382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.166416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.177458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e4578 00:21:23.004 [2024-07-23 04:17:16.179510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.179544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.190689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e4de8 00:21:23.004 [2024-07-23 04:17:16.192802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.192835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.204589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e5658 00:21:23.004 [2024-07-23 04:17:16.206632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.206665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.218051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e5ec8 00:21:23.004 [2024-07-23 04:17:16.220041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.220078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.231163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e6738 00:21:23.004 [2024-07-23 04:17:16.233127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.233161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.244371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e6fa8 00:21:23.004 [2024-07-23 04:17:16.246259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.246292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.257527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e7818 00:21:23.004 [2024-07-23 04:17:16.259496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.259533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.270619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e8088 00:21:23.004 [2024-07-23 
04:17:16.272597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.272630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.283824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e88f8 00:21:23.004 [2024-07-23 04:17:16.285762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.285796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.297095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e9168 00:21:23.004 [2024-07-23 04:17:16.298918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.298959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.310167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190e99d8 00:21:23.004 [2024-07-23 04:17:16.312045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.312080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.323303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ea248 00:21:23.004 [2024-07-23 04:17:16.325164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.325197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:23.004 [2024-07-23 04:17:16.336463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190eaab8 00:21:23.004 [2024-07-23 04:17:16.338248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.004 [2024-07-23 04:17:16.338281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:23.263 [2024-07-23 04:17:16.349518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190eb328 00:21:23.263 [2024-07-23 04:17:16.351349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.263 [2024-07-23 04:17:16.351416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:23.263 [2024-07-23 04:17:16.363515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ebb98 
00:21:23.263 [2024-07-23 04:17:16.365437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.263 [2024-07-23 04:17:16.365471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:23.263 [2024-07-23 04:17:16.378403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ec408 00:21:23.263 [2024-07-23 04:17:16.380370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.380408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.393643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ecc78 00:21:23.264 [2024-07-23 04:17:16.395630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.395665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.408492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ed4e8 00:21:23.264 [2024-07-23 04:17:16.410378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.410413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.422850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190edd58 00:21:23.264 [2024-07-23 04:17:16.424717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.424751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.437063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190ee5c8 00:21:23.264 [2024-07-23 04:17:16.438822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.438855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.451230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190eee38 00:21:23.264 [2024-07-23 04:17:16.453015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.453049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.465374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with 
pdu=0x2000190ef6a8 00:21:23.264 [2024-07-23 04:17:16.467111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.467149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.479178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190eff18 00:21:23.264 [2024-07-23 04:17:16.481060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.481103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.493156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f0788 00:21:23.264 [2024-07-23 04:17:16.494865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.494910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.507316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f0ff8 00:21:23.264 [2024-07-23 04:17:16.508984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.509018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.521256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f1868 00:21:23.264 [2024-07-23 04:17:16.522932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.522994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.535336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f20d8 00:21:23.264 [2024-07-23 04:17:16.536988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.537022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.549426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f2948 00:21:23.264 [2024-07-23 04:17:16.551094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.551132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.562878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b10900) with pdu=0x2000190f31b8 00:21:23.264 [2024-07-23 04:17:16.564532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.564568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.576052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f3a28 00:21:23.264 [2024-07-23 04:17:16.577575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.577610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.589307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f4298 00:21:23.264 [2024-07-23 04:17:16.590822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.590855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:23.264 [2024-07-23 04:17:16.602369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f4b08 00:21:23.264 [2024-07-23 04:17:16.603892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.264 [2024-07-23 04:17:16.603960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:23.523 [2024-07-23 04:17:16.615449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f5378 00:21:23.523 [2024-07-23 04:17:16.616925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.523 [2024-07-23 04:17:16.616981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:23.523 [2024-07-23 04:17:16.628382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f5be8 00:21:23.523 [2024-07-23 04:17:16.629843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.523 [2024-07-23 04:17:16.629876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:23.523 [2024-07-23 04:17:16.641471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f6458 00:21:23.523 [2024-07-23 04:17:16.642941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.523 [2024-07-23 04:17:16.643002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:23.523 [2024-07-23 04:17:16.654458] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f6cc8 00:21:23.523 [2024-07-23 04:17:16.655942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.523 [2024-07-23 04:17:16.656002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:23.523 [2024-07-23 04:17:16.667713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f7538 00:21:23.523 [2024-07-23 04:17:16.669230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.523 [2024-07-23 04:17:16.669266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:23.523 [2024-07-23 04:17:16.680808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f7da8 00:21:23.523 [2024-07-23 04:17:16.682290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.523 [2024-07-23 04:17:16.682324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:23.523 [2024-07-23 04:17:16.693817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f8618 00:21:23.523 [2024-07-23 04:17:16.695414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.523 [2024-07-23 04:17:16.695449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:23.523 [2024-07-23 04:17:16.707868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f8e88 00:21:23.523 [2024-07-23 04:17:16.709335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.523 [2024-07-23 04:17:16.709371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:23.523 [2024-07-23 04:17:16.724286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f96f8 00:21:23.524 [2024-07-23 04:17:16.725815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.725852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:23.524 [2024-07-23 04:17:16.740047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f9f68 00:21:23.524 [2024-07-23 04:17:16.741468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.741502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:23.524 [2024-07-23 04:17:16.753883] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fa7d8 00:21:23.524 [2024-07-23 04:17:16.755223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.755260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:23.524 [2024-07-23 04:17:16.767012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fb048 00:21:23.524 [2024-07-23 04:17:16.768380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.768412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:23.524 [2024-07-23 04:17:16.780225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fb8b8 00:21:23.524 [2024-07-23 04:17:16.781494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.781530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:23.524 [2024-07-23 04:17:16.793278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fc128 00:21:23.524 [2024-07-23 04:17:16.794519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.794554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:23.524 [2024-07-23 04:17:16.806304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fc998 00:21:23.524 [2024-07-23 04:17:16.807567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.807603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:23.524 [2024-07-23 04:17:16.819435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fd208 00:21:23.524 [2024-07-23 04:17:16.820682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.820716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:23.524 [2024-07-23 04:17:16.832572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fda78 00:21:23.524 [2024-07-23 04:17:16.833783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.833817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:21:23.524 [2024-07-23 04:17:16.845704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fe2e8 00:21:23.524 [2024-07-23 04:17:16.846902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.846962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:23.524 [2024-07-23 04:17:16.858781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190feb58 00:21:23.524 [2024-07-23 04:17:16.860081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.524 [2024-07-23 04:17:16.860116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:23.782 [2024-07-23 04:17:16.877279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fef90 00:21:23.782 [2024-07-23 04:17:16.879450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.782 [2024-07-23 04:17:16.879485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.782 [2024-07-23 04:17:16.890327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190feb58 00:21:23.782 [2024-07-23 04:17:16.892557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.782 [2024-07-23 04:17:16.892591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:23.782 [2024-07-23 04:17:16.903490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fe2e8 00:21:23.782 [2024-07-23 04:17:16.905685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.782 [2024-07-23 04:17:16.905718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:23.782 [2024-07-23 04:17:16.916889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fda78 00:21:23.782 [2024-07-23 04:17:16.919013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.782 [2024-07-23 04:17:16.919073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:23.782 [2024-07-23 04:17:16.930586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fd208 00:21:23.782 [2024-07-23 04:17:16.932775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:16.932810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 
sqhd:0079 p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:16.946136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fc998 00:21:23.783 [2024-07-23 04:17:16.948496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:16.948529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:16.961045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fc128 00:21:23.783 [2024-07-23 04:17:16.963370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:16.963408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:16.975581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fb8b8 00:21:23.783 [2024-07-23 04:17:16.977757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:16.977790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:16.989064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fb048 00:21:23.783 [2024-07-23 04:17:16.991126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:16.991163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:17.002123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190fa7d8 00:21:23.783 [2024-07-23 04:17:17.004243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:17.004301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:17.015306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f9f68 00:21:23.783 [2024-07-23 04:17:17.017345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:17.017377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:17.028466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f96f8 00:21:23.783 [2024-07-23 04:17:17.030505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:17.030537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:17.041644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f8e88 00:21:23.783 [2024-07-23 04:17:17.043709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:17.043743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:17.055011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f8618 00:21:23.783 [2024-07-23 04:17:17.057031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:17.057065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:17.068167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f7da8 00:21:23.783 [2024-07-23 04:17:17.070151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:17.070184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:17.081481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f7538 00:21:23.783 [2024-07-23 04:17:17.083462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:17.083497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:17.094583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f6cc8 00:21:23.783 [2024-07-23 04:17:17.096584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:17.096618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:17.107829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f6458 00:21:23.783 [2024-07-23 04:17:17.109839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:17.109872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:23.783 [2024-07-23 04:17:17.121068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10900) with pdu=0x2000190f5be8 00:21:23.783 [2024-07-23 04:17:17.122964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.783 [2024-07-23 04:17:17.122997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:24.041 00:21:24.041 Latency(us) 00:21:24.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.041 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:24.041 nvme0n1 : 2.00 18693.29 73.02 0.00 0.00 6841.58 2576.76 25856.93 00:21:24.041 =================================================================================================================== 00:21:24.041 Total : 18693.29 73.02 0.00 0.00 6841.58 2576.76 25856.93 00:21:24.041 0 00:21:24.041 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:24.041 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:24.041 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:24.041 | .driver_specific 00:21:24.041 | .nvme_error 00:21:24.041 | .status_code 00:21:24.041 | .command_transient_transport_error' 00:21:24.041 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96614 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 96614 ']' 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 96614 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96614 00:21:24.300 killing process with pid 96614 00:21:24.300 Received shutdown signal, test time was about 2.000000 seconds 00:21:24.300 00:21:24.300 Latency(us) 00:21:24.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.300 =================================================================================================================== 00:21:24.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96614' 00:21:24.300 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 96614 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 96614 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 
-- # rw=randwrite 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96674 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96674 /var/tmp/bperf.sock 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 96674 ']' 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:24.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.301 04:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:24.559 [2024-07-23 04:17:17.666136] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:21:24.559 [2024-07-23 04:17:17.666418] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96674 ] 00:21:24.559 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:24.559 Zero copy mechanism will not be used. 00:21:24.559 [2024-07-23 04:17:17.791638] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
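For reference, the get_transient_errcount step traced earlier (host/digest.sh@71, @27 and @28) only verifies that the injected digest failures were accounted as transient transport errors (status 00/22) on the host side. A minimal stand-alone sketch of that check, assuming the same rpc.py path, /var/tmp/bperf.sock RPC socket and nvme0n1 bdev used in this run, and assuming the controller was attached after bdev_nvme_set_options --nvme-error-stat so the per-status-code counters are populated:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Read the per-controller NVMe error counters exposed by bdev_get_iostat and
# pull out the transient transport error count, exactly as the jq filter in
# the trace above does.
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # the pass above observed 146 such completions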
00:21:24.559 [2024-07-23 04:17:17.801573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.559 [2024-07-23 04:17:17.863175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.817 [2024-07-23 04:17:17.914538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:25.384 04:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.384 04:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:21:25.384 04:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:25.384 04:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:25.643 04:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:25.643 04:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.643 04:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:25.643 04:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.643 04:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:25.643 04:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:25.902 nvme0n1 00:21:25.902 04:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:25.902 04:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.902 04:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:25.902 04:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.902 04:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:25.902 04:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:25.902 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:25.902 Zero copy mechanism will not be used. 00:21:25.902 Running I/O for 2 seconds... 
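With the new bdevperf instance listening on /var/tmp/bperf.sock, the trace above (host/digest.sh@56 through @69) sets up the randwrite / 131072-byte / qd=16 error pass whose output follows. A condensed sketch of that sequence, assuming the suite's helpers as seen in the trace (bperf_rpc expands to rpc.py -s /var/tmp/bperf.sock per host/digest.sh@18; rpc_cmd and waitforlisten are xtrace-disabled here, and rpc_cmd presumably addresses the nvmf target's default RPC socket):

# Start bdevperf as the TCP host: randwrite, 131072-byte I/O, queue depth 16, 2 s run.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
waitforlisten $! /var/tmp/bperf.sock
# Enable NVMe error statistics and set the bdev retry count, as traced above.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Inject crc32c errors via the accel error module (the -t disable and
# -t corrupt -i 32 calls traced above), attaching the controller with TCP
# data digest (--ddgst) enabled in between.
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the workload; the digest failures show up below as
# data_crc32_calc_done errors and TRANSIENT TRANSPORT ERROR completions.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests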
00:21:25.902 [2024-07-23 04:17:19.216417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:25.902 [2024-07-23 04:17:19.216699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.902 [2024-07-23 04:17:19.216728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.902 [2024-07-23 04:17:19.221043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:25.902 [2024-07-23 04:17:19.221299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.902 [2024-07-23 04:17:19.221337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.902 [2024-07-23 04:17:19.225650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:25.902 [2024-07-23 04:17:19.225921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.902 [2024-07-23 04:17:19.225959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.902 [2024-07-23 04:17:19.230277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:25.902 [2024-07-23 04:17:19.230554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.902 [2024-07-23 04:17:19.230583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.902 [2024-07-23 04:17:19.234875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:25.902 [2024-07-23 04:17:19.235240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.902 [2024-07-23 04:17:19.235282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.902 [2024-07-23 04:17:19.239624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:25.902 [2024-07-23 04:17:19.239897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.902 [2024-07-23 04:17:19.239935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.902 [2024-07-23 04:17:19.244306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:25.902 [2024-07-23 04:17:19.244585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.902 [2024-07-23 04:17:19.244613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.248855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.249165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.249209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.253377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.253647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.253675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.257943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.258214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.258241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.262393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.262661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.262689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.267177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.267496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.267524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.271955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.272283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.272348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.276483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.276753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.276781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.281051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.281321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.281348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.285476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.285745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.285772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.289968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.290240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.290268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.294401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.294673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.294700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.298689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.298765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.298787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.303167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.303252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.303274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.307607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.307689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.307711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.312102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.312182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.312204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.316497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.316576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.316597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.320974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.321052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.321073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.325445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.325525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.325547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.329879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.329999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.330022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.334433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.334516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.334537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.338864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.338972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 
[2024-07-23 04:17:19.338995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.343476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.343563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.343585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.163 [2024-07-23 04:17:19.347986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.163 [2024-07-23 04:17:19.348064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.163 [2024-07-23 04:17:19.348086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.352380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.352459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.352480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.356880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.356991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.357013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.361355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.361434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.361455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.365744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.365822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.365844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.370290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.370370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.370391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.374625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.374707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.374729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.379079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.379161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.379183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.383456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.383536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.383557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.387890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.388006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.388028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.392326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.392406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.392427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.396725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.396803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.396824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.401228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.401306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.401327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.405634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.405716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.405737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.410158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.410256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.410277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.414462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.414543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.414564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.418836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.418959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.418981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.423421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.423497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.423518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.427835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.427920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.427981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.432351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.432428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.432449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.436752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.436830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.436852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.441228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.441315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.441336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.445700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.445784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.445805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.450206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.450286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.164 [2024-07-23 04:17:19.450308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.164 [2024-07-23 04:17:19.454611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.164 [2024-07-23 04:17:19.454689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.454710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.459164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 [2024-07-23 04:17:19.459247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.459270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.463639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 
[2024-07-23 04:17:19.463717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.463739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.468091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 [2024-07-23 04:17:19.468171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.468193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.472484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 [2024-07-23 04:17:19.472561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.472582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.476886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 [2024-07-23 04:17:19.476999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.477023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.481350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 [2024-07-23 04:17:19.481431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.481452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.485746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 [2024-07-23 04:17:19.485828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.485849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.490333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 [2024-07-23 04:17:19.490412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.490434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.494750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 [2024-07-23 04:17:19.494841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.494863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.499295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 [2024-07-23 04:17:19.499414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.499435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.165 [2024-07-23 04:17:19.503767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.165 [2024-07-23 04:17:19.503846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.165 [2024-07-23 04:17:19.503868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.425 [2024-07-23 04:17:19.508225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.425 [2024-07-23 04:17:19.508326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.425 [2024-07-23 04:17:19.508347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.425 [2024-07-23 04:17:19.512669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.425 [2024-07-23 04:17:19.512776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.425 [2024-07-23 04:17:19.512797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.425 [2024-07-23 04:17:19.517277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.425 [2024-07-23 04:17:19.517358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.425 [2024-07-23 04:17:19.517380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.521697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.521775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.521798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.526332] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.526429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.526452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.530664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.530756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.530777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.535199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.535284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.535307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.539658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.539736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.539757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.544150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.544231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.544269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.548571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.548661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.548683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.553143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.553222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.553244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:26.426 [2024-07-23 04:17:19.557547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.557622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.557644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.561931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.562023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.562045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.566360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.566440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.566462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.570689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.570765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.570786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.575212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.575297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.575320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.579748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.579832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.579855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.584368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.584448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.584470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.588773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.588864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.588886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.593376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.593456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.593478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.597815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.597900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.597951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.602254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.602347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.602369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.606682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.606776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.606797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.611281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.611379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.611400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.615730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.615811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.615832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.620284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.620364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.620385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.624727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.624815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.624837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.629277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.629359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.629380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.633648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.633729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.426 [2024-07-23 04:17:19.633751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.426 [2024-07-23 04:17:19.638141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.426 [2024-07-23 04:17:19.638221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.638243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.642558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.642638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.642660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.646950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.647074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.647096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.651536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.651613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.651634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.656078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.656160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.656182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.660457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.660547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.660569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.664892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.665006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.665043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.669425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.669531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.669552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.673925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.674007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.674029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.678300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.678392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 
[2024-07-23 04:17:19.678414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.682747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.682837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.682858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.687451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.687535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.687557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.691870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.692001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.692023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.696376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.696453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.696474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.700740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.700819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.700840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.705314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.705397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.705419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.709738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.709816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.709838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.714330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.714425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.714447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.718727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.718816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.718837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.723243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.723331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.723369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.727685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.727780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.727802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.732189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.732297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.732318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.736631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.736711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.736733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.741144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.741235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.741257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.745545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.745632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.745654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.750029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.750110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.750131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.754460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.754548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.427 [2024-07-23 04:17:19.754570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.427 [2024-07-23 04:17:19.758977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.427 [2024-07-23 04:17:19.759107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.428 [2024-07-23 04:17:19.759129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.428 [2024-07-23 04:17:19.763473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.428 [2024-07-23 04:17:19.763553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.428 [2024-07-23 04:17:19.763574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.428 [2024-07-23 04:17:19.767916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.428 [2024-07-23 04:17:19.768039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.428 [2024-07-23 04:17:19.768061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.687 [2024-07-23 04:17:19.772393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.687 [2024-07-23 04:17:19.772489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.687 [2024-07-23 04:17:19.772510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.687 [2024-07-23 04:17:19.776849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.687 [2024-07-23 04:17:19.777057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.687 [2024-07-23 04:17:19.777080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.687 [2024-07-23 04:17:19.781379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.781475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.781496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.785944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.786035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.786057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.790333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.790411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.790432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.794761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.794842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.794864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.799377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.799456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.799478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.803724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 
04:17:19.803824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.803845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.808233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.808346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.808368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.812687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.812777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.812798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.817204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.817302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.817323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.821611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.821690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.821711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.826142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.826245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.826266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.830538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.830620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.830641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.834978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 
00:21:26.688 [2024-07-23 04:17:19.835133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.835156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.839434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.839514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.839535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.843851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.844014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.844045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.848392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.848472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.848494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.852803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.852881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.852902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.857343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.857442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.857464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.861758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.861843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.861864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.866298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.866380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.866401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.870647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.870731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.870752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.875228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.875333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.875387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.879687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.879768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.879789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.884206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.884302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.884323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.888688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.888778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.888799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.893244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.893325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.893347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.688 [2024-07-23 04:17:19.897661] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.688 [2024-07-23 04:17:19.897742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.688 [2024-07-23 04:17:19.897769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.902201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.902278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.902299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.906630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.906717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.906738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.911206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.911302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.911325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.915629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.915707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.915728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.920096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.920175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.920197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.924533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.924616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.924637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.928929] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.929009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.929029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.933343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.933436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.933456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.937819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.937944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.937966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.942259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.942350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.942370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.946621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.946702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.946723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.951214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.951308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.951362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.956339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.956434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.956455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.689 
[2024-07-23 04:17:19.961233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.961344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.961366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.966130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.966208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.966231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.971087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.971178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.971201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.976025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.976121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.976143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.980924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.981052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.981075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.986153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.986217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.986241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.991236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.991319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.991372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:19.996324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:19.996422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:19.996447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:20.001741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:20.001807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:20.001830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:20.007120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:20.007191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:20.007215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:20.012118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:20.012185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:20.012209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:20.017340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:20.017417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:20.017439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:20.022367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:20.022452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:20.022473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.689 [2024-07-23 04:17:20.027237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.689 [2024-07-23 04:17:20.027334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.689 [2024-07-23 04:17:20.027373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.949 [2024-07-23 04:17:20.032200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.949 [2024-07-23 04:17:20.032303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.949 [2024-07-23 04:17:20.032326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.949 [2024-07-23 04:17:20.036996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.949 [2024-07-23 04:17:20.037078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.949 [2024-07-23 04:17:20.037100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.949 [2024-07-23 04:17:20.041568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.949 [2024-07-23 04:17:20.041651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.949 [2024-07-23 04:17:20.041673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.949 [2024-07-23 04:17:20.046253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.949 [2024-07-23 04:17:20.046337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.949 [2024-07-23 04:17:20.046359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.949 [2024-07-23 04:17:20.051010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.949 [2024-07-23 04:17:20.051115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.949 [2024-07-23 04:17:20.051138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.949 [2024-07-23 04:17:20.055631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.055714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.055736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.060306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.060390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.060410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.065155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.065249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.065271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.069776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.069860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.069882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.074448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.074527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.074549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.079406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.079487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.079509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.084116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.084195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.084217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.088665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.088747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.088768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.093325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.093420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.093442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.098192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.098271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.098292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.102727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.102811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.102833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.107528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.107612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.107634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.112450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.112528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.112549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.117090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.117177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.117199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.121693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.121777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.121798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.126619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.126714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 
04:17:20.126736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.131278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.131375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.131427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.135963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.136041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.136063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.140556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.140636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.140658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.145442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.145525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.145546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.150167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.150275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.150297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.154715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.154792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.154814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.159273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.159383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:26.950 [2024-07-23 04:17:20.159404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.163744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.163825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.163846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.168263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.168355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.168377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.172643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.172723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.950 [2024-07-23 04:17:20.172744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.950 [2024-07-23 04:17:20.177194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.950 [2024-07-23 04:17:20.177310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.177331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.181558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.181639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.181660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.186166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.186247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.186268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.190585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.190662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.190684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.194997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.195102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.195124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.199508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.199587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.199608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.204078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.204157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.204178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.208506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.208597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.208618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.213048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.213130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.213151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.217544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.217638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.217659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.222220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.222317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.222339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.226766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.226844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.226866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.231268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.231364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.231386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.235691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.235773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.235794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.240254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.240333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.240354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.244601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.244692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.244713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.249150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.249245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.249267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.253602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.253683] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.253703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.258124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.258225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.258246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.262518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.262597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.262618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.266935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.267015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.267071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.271456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.271547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.271569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.275918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.276007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.276029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.280360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.280442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.280463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.284729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.284807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.284828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.951 [2024-07-23 04:17:20.289249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:26.951 [2024-07-23 04:17:20.289358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.951 [2024-07-23 04:17:20.289379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.293702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.293794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.293815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.298223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.298323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.298344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.302622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.302709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.302730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.307136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.307232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.307256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.311639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.311726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.311747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.316111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 
04:17:20.316188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.316210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.320559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.320637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.320659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.325004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.325099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.325120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.329456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.329541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.329563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.333912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.334016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.334038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.338431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.338513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.338534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.342888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.342995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.343017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.347340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 
00:21:27.212 [2024-07-23 04:17:20.347467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.347488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.351836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.351915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.351966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.356307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.356400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.356421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.360764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.360847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.360869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.365248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.365345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.365366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.369714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.369803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.369825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.374260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.374350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.374371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.378662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.378755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.378777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.383290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.383391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.212 [2024-07-23 04:17:20.383413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.212 [2024-07-23 04:17:20.387677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.212 [2024-07-23 04:17:20.387772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.387793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.392242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.392324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.392345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.396636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.396712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.396734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.401144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.401237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.401259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.405582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.405663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.405685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.410117] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.410200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.410223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.414491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.414584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.414606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.418976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.419081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.419103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.423439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.423518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.423540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.427845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.427977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.427999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.432300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.432391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.432413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.436718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.436809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.436830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.441228] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.441333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.441365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.445641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.445732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.445753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.450162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.450244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.450265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.454499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.454591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.454613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.459046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.459130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.459153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.463509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.463599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.463620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.467950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.468040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.468061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.213 
[2024-07-23 04:17:20.472363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.472440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.472461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.476785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.476866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.476888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.481356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.481498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.481519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.485717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.485797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.485818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.490231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.490309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.490330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.494583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.494671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.494695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.499119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.499214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.499237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.503625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.503702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.213 [2024-07-23 04:17:20.503723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.213 [2024-07-23 04:17:20.508190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.213 [2024-07-23 04:17:20.508292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.508313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.214 [2024-07-23 04:17:20.512703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.214 [2024-07-23 04:17:20.512782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.512803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.214 [2024-07-23 04:17:20.517249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.214 [2024-07-23 04:17:20.517332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.517353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.214 [2024-07-23 04:17:20.521617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.214 [2024-07-23 04:17:20.521708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.521730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.214 [2024-07-23 04:17:20.526090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.214 [2024-07-23 04:17:20.526179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.526201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.214 [2024-07-23 04:17:20.530540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.214 [2024-07-23 04:17:20.530631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.530652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.214 [2024-07-23 04:17:20.535016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.214 [2024-07-23 04:17:20.535130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.535152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.214 [2024-07-23 04:17:20.539498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.214 [2024-07-23 04:17:20.539589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.539610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.214 [2024-07-23 04:17:20.543975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.214 [2024-07-23 04:17:20.544064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.544085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.214 [2024-07-23 04:17:20.548442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.214 [2024-07-23 04:17:20.548523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.548544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.214 [2024-07-23 04:17:20.552881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.214 [2024-07-23 04:17:20.552991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.214 [2024-07-23 04:17:20.553024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.475 [2024-07-23 04:17:20.557348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.475 [2024-07-23 04:17:20.557426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.475 [2024-07-23 04:17:20.557447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.475 [2024-07-23 04:17:20.561772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.475 [2024-07-23 04:17:20.561854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.475 [2024-07-23 04:17:20.561876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.475 [2024-07-23 04:17:20.566252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.475 [2024-07-23 04:17:20.566361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.475 [2024-07-23 04:17:20.566382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.475 [2024-07-23 04:17:20.570682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.570761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.570782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.575205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.575290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.575313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.579640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.579719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.579741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.584192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.584287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.584308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.588601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.588699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.588721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.593179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.593261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 
[2024-07-23 04:17:20.593282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.597578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.597685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.597706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.602068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.602150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.602172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.606470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.606560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.606582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.611009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.611125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.611148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.615483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.615576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.615598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.619993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.620081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.620104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.624439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.624531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.624553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.628906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.628986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.629008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.633323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.633415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.633437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.637749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.637826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.637847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.642323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.642405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.642427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.646714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.646797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.646818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.651214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.651294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.651316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.655670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.655751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.655772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.660490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.660567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.660589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.665026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.665119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.665141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.669387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.669464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.669484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.673767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.673858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.673880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.678290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.678385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.678406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.682669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.682761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.682783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.687061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.476 [2024-07-23 04:17:20.687153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.476 [2024-07-23 04:17:20.687187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.476 [2024-07-23 04:17:20.691492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.691573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.691595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.695968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.696045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.696067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.700328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.700414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.700435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.704752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.704829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.704850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.709257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.709365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.709386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.713707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.713787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.713808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.718149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 
[2024-07-23 04:17:20.718232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.718253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.722655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.722733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.722755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.727154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.727236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.727259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.731580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.731681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.731703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.736081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.736163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.736184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.740426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.740519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.740540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.744935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.745022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.745043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.749315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.749408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.749430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.753668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.753762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.753783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.758190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.758310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.758337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.762596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.762676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.762697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.767145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.767247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.767270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.771625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.771707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.771728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.776149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.776232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.776253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.780503] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.780590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.780623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.785012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.785104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.785132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.789367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.789468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.789490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.793800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.793881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.793918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.798264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.798366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.798388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.802683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.802774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.802798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.477 [2024-07-23 04:17:20.807161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.477 [2024-07-23 04:17:20.807251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.477 [2024-07-23 04:17:20.807283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
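Note on the records above and below: each induced failure appears as the same two-step pattern, tcp.c:2113:data_crc32_calc_done flags a data-digest (CRC32C) mismatch on the TCP qpair, and the matching WRITE (len:32) completion is then printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22). To tally how many such failures a saved copy of this log contains, a simple count works; build.log is only a placeholder file name, not something produced by this run, and grep -o is used rather than grep -c because several records share one physical line in this capture:

    grep -o 'Data digest error' build.log | wc -l                    # transport-side digest failures detected
    grep -o 'TRANSIENT TRANSPORT ERROR (00/22)' build.log | wc -l    # completions reported back to the host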
00:21:27.477 [2024-07-23 04:17:20.811637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.478 [2024-07-23 04:17:20.811725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.478 [2024-07-23 04:17:20.811746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.478 [2024-07-23 04:17:20.816208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.478 [2024-07-23 04:17:20.816289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.478 [2024-07-23 04:17:20.816311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.820634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.820715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.820737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.825207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.825294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.825328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.829573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.829661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.829683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.834026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.834109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.834131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.838439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.838534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.838556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.842889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.843001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.843049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.847343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.847450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.847471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.851857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.851987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.852009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.856389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.856464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.856485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.860827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.860950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.860973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.865329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.865412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.865432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.869702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.869782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.869803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.874259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.874355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.874376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.878685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.878763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.878784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.883259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.883345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.883399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.887700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.887779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.887800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.892254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.892349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.892370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.896677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.896760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.896781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.901211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.901298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.901319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.905682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.905763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.905784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.910142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.910237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.910258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.914616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.914693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.914715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.919126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.919202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.919225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.923535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.923622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.923643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.928030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.928111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.749 [2024-07-23 04:17:20.928132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.749 [2024-07-23 04:17:20.932430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.749 [2024-07-23 04:17:20.932537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 
[2024-07-23 04:17:20.932557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.936842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.936981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.937002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.941371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.941459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.941479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.945808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.945899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.945966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.950314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.950418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.950439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.954757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.954839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.954861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.959281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.959351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.959387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.963755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.963840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.963861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.968353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.968435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.968469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.972944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.973044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.973065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.977385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.977476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.977497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.981841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.981967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.981990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.986330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.986421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.986442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.990756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.990832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.990854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.995239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.995306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.995329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:20.999692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:20.999772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:20.999793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.004299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.004377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.004398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.009103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.009165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.009187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.014003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.014079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.014102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.018925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.019045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.019069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.024407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.024491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.024514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.029421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.029498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.029519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.034500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.034577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.034599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.039381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.039466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.039488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.044198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.044276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.044314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.048943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.049039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.049062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.053856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.750 [2024-07-23 04:17:21.054002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.750 [2024-07-23 04:17:21.054025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.750 [2024-07-23 04:17:21.058567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.751 [2024-07-23 04:17:21.058644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.751 [2024-07-23 04:17:21.058666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.751 [2024-07-23 04:17:21.063006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.751 
[2024-07-23 04:17:21.063118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.751 [2024-07-23 04:17:21.063140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.751 [2024-07-23 04:17:21.067746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.751 [2024-07-23 04:17:21.067827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.751 [2024-07-23 04:17:21.067849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.751 [2024-07-23 04:17:21.072323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.751 [2024-07-23 04:17:21.072404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.751 [2024-07-23 04:17:21.072425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.751 [2024-07-23 04:17:21.076789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.751 [2024-07-23 04:17:21.076872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.751 [2024-07-23 04:17:21.076893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.751 [2024-07-23 04:17:21.081359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.751 [2024-07-23 04:17:21.081456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.751 [2024-07-23 04:17:21.081478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.751 [2024-07-23 04:17:21.085782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.751 [2024-07-23 04:17:21.085921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.751 [2024-07-23 04:17:21.085960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.751 [2024-07-23 04:17:21.090311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:27.751 [2024-07-23 04:17:21.090393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.751 [2024-07-23 04:17:21.090415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.010 [2024-07-23 04:17:21.094751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) 
with pdu=0x2000190fef90 00:21:28.010 [2024-07-23 04:17:21.094857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.010 [2024-07-23 04:17:21.094879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.010 [2024-07-23 04:17:21.099254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.010 [2024-07-23 04:17:21.099387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.010 [2024-07-23 04:17:21.099409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.010 [2024-07-23 04:17:21.103643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.010 [2024-07-23 04:17:21.103724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.010 [2024-07-23 04:17:21.103745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.010 [2024-07-23 04:17:21.108107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.010 [2024-07-23 04:17:21.108189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.108210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.112557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.112638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.112660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.117039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.117145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.117167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.121445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.121525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.121553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.125904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.126044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.126083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.130365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.130457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.130477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.134815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.134952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.134974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.139402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.139484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.139506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.143801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.143901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.143923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.148795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.148876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.148898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.153701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.153782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.153805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.158534] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.158634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.158655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.163705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.163788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.163810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.168888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.169047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.169072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.173878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.174044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.174085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.178788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.178881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.178905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.183725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.183827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.183849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.188686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.188786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.188807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.011 
[2024-07-23 04:17:21.193744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.193824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.193846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.198336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.198427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.198449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.011 [2024-07-23 04:17:21.202922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b10c40) with pdu=0x2000190fef90 00:21:28.011 [2024-07-23 04:17:21.203018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.011 [2024-07-23 04:17:21.203067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.011 00:21:28.011 Latency(us) 00:21:28.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.011 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:28.011 nvme0n1 : 2.00 6812.95 851.62 0.00 0.00 2342.56 1697.98 9592.09 00:21:28.011 =================================================================================================================== 00:21:28.011 Total : 6812.95 851.62 0.00 0.00 2342.56 1697.98 9592.09 00:21:28.011 0 00:21:28.011 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:28.011 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:28.011 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:28.011 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:28.011 | .driver_specific 00:21:28.011 | .nvme_error 00:21:28.011 | .status_code 00:21:28.011 | .command_transient_transport_error' 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 439 > 0 )) 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96674 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 96674 ']' 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 96674 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96674 00:21:28.270 killing process with pid 96674 00:21:28.270 Received shutdown signal, test time was about 2.000000 seconds 00:21:28.270 00:21:28.270 Latency(us) 00:21:28.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.270 =================================================================================================================== 00:21:28.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96674' 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 96674 00:21:28.270 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 96674 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 96476 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 96476 ']' 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 96476 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96476 00:21:28.528 killing process with pid 96476 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96476' 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 96476 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 96476 00:21:28.528 00:21:28.528 real 0m16.944s 00:21:28.528 user 0m31.821s 00:21:28.528 sys 0m4.815s 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.528 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:28.528 ************************************ 00:21:28.528 END TEST nvmf_digest_error 00:21:28.528 ************************************ 00:21:28.787 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:21:28.787 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:28.787 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:28.787 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.787 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 
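The pass/fail gate for this digest-error case is the transient-transport-error counter exposed by bdev_get_iostat: the harness pipes it through the jq filter shown above and asserts it is greater than zero (439 here, matching the data-digest failures reported against tqpair 0x1b10c40). Condensed into a standalone check it looks roughly like this (a sketch; the JSON field path is inferred from that jq filter, and /var/tmp/bperf.sock is the RPC socket used above):

errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# Each WRITE that failed the data-digest (CRC32) check was completed back to the host as
# COMMAND TRANSIENT TRANSPORT ERROR (00/22), so this counter tracks the injected corruption.
(( errs > 0 )) && echo "data digest errors surfaced as transient transport errors: $errs"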
00:21:28.787 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.787 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:21:28.787 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.787 04:17:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.787 rmmod nvme_tcp 00:21:28.787 rmmod nvme_fabrics 00:21:28.787 rmmod nvme_keyring 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 96476 ']' 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 96476 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 96476 ']' 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 96476 00:21:28.787 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96476) - No such process 00:21:28.787 Process with pid 96476 is not found 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 96476 is not found' 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.787 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:28.788 00:21:28.788 real 0m34.358s 00:21:28.788 user 1m4.147s 00:21:28.788 sys 0m9.884s 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:28.788 ************************************ 00:21:28.788 END TEST nvmf_digest 00:21:28.788 ************************************ 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.788 ************************************ 00:21:28.788 START TEST nvmf_host_multipath 00:21:28.788 ************************************ 00:21:28.788 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:29.047 * Looking for test storage... 00:21:29.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:29.047 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
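As an aside on the identifiers exported a few entries back: NVME_HOSTNQN comes straight from nvme-cli, and in this run NVME_HOSTID is simply the UUID portion of that NQN. A quick standalone reproduction (the UUID is random so the values will differ; deriving the ID by stripping the prefix is an assumption based on the pair of values printed above, not necessarily how common.sh computes it):

hostnqn=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274
hostid=${hostnqn##*uuid:}          # the bare UUID, matching the NVME_HOSTID value above
echo "--hostnqn=$hostnqn --hostid=$hostid"   # the flags collected in the NVME_HOST array above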
00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:29.048 Cannot find device "nvmf_tgt_br" 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:29.048 Cannot find device "nvmf_tgt_br2" 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:29.048 Cannot find device "nvmf_tgt_br" 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:29.048 Cannot find device "nvmf_tgt_br2" 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:29.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:29.048 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:29.048 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:29.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:21:29.307 00:21:29.307 --- 10.0.0.2 ping statistics --- 00:21:29.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.307 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:29.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:29.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:29.307 00:21:29.307 --- 10.0.0.3 ping statistics --- 00:21:29.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.307 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:29.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:29.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:21:29.307 00:21:29.307 --- 10.0.0.1 ping statistics --- 00:21:29.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.307 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=96939 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 96939 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 96939 ']' 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.307 04:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:29.566 [2024-07-23 04:17:22.651439] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:21:29.566 [2024-07-23 04:17:22.651528] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.566 [2024-07-23 04:17:22.774538] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:21:29.566 [2024-07-23 04:17:22.792282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:29.566 [2024-07-23 04:17:22.847772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.566 [2024-07-23 04:17:22.847824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.566 [2024-07-23 04:17:22.847833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.566 [2024-07-23 04:17:22.847840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.566 [2024-07-23 04:17:22.847846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.566 [2024-07-23 04:17:22.847976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.566 [2024-07-23 04:17:22.848243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.566 [2024-07-23 04:17:22.898389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:30.503 04:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.503 04:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:21:30.503 04:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.503 04:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.503 04:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:30.503 04:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.503 04:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96939 00:21:30.503 04:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:30.503 [2024-07-23 04:17:23.791262] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.503 04:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:30.761 Malloc0 00:21:30.761 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:31.020 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:31.278 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.278 [2024-07-23 04:17:24.601718] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.278 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.536 [2024-07-23 04:17:24.789783] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:31.536 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:31.536 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96984 00:21:31.536 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.536 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96984 /var/tmp/bdevperf.sock 00:21:31.536 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 96984 ']' 00:21:31.536 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.536 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.536 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.536 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.536 04:17:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:32.470 04:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.470 04:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:21:32.470 04:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:32.728 04:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:32.986 Nvme0n1 00:21:32.986 04:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:33.243 Nvme0n1 00:21:33.243 04:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:33.243 04:17:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:34.176 04:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:34.176 04:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:34.434 04:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:34.692 04:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # 
confirm_io_on_port optimized 4421 00:21:34.692 04:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97029 00:21:34.692 04:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:34.692 04:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:41.250 Attaching 4 probes... 00:21:41.250 @path[10.0.0.2, 4421]: 19917 00:21:41.250 @path[10.0.0.2, 4421]: 20056 00:21:41.250 @path[10.0.0.2, 4421]: 19988 00:21:41.250 @path[10.0.0.2, 4421]: 20048 00:21:41.250 @path[10.0.0.2, 4421]: 19989 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97029 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:41.250 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:41.251 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:41.509 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:41.509 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97147 00:21:41.509 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:41.509 04:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:48.105 04:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 
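confirm_io_on_port has two halves: the bpftrace script nvmf_path.bt counts completed I/O per @path[ip, port] while it runs, and the nvmf_subsystem_get_listeners call above, piped into the jq filter in the next entry, reports which listener currently advertises the requested ANA state. Condensed, the check amounts to the following (a sketch; the field names come from that jq filter and the awk/cut/sed pipeline is the one the harness applies to trace.txt):

active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select(.ana_states[0].ana_state=="non_optimized") | .address.trsvcid')
probed_port=$(awk '$1=="@path[10.0.0.2," {print $2}' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt \
        | cut -d ']' -f1 | sed -n 1p)      # port the bpftrace probes actually saw I/O on
expected=4420
[[ $active_port == "$expected" && $probed_port == "$expected" ]]   # both must match for the step to pass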
00:21:48.105 04:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:48.105 04:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:48.105 04:17:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:48.105 Attaching 4 probes... 00:21:48.105 @path[10.0.0.2, 4420]: 21079 00:21:48.105 @path[10.0.0.2, 4420]: 21355 00:21:48.105 @path[10.0.0.2, 4420]: 21441 00:21:48.105 @path[10.0.0.2, 4420]: 21768 00:21:48.105 @path[10.0.0.2, 4420]: 21448 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97147 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:48.105 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:48.363 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:48.364 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:48.364 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97261 00:21:48.364 04:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:54.928 Attaching 4 probes... 
00:21:54.928 @path[10.0.0.2, 4421]: 12530 00:21:54.928 @path[10.0.0.2, 4421]: 19783 00:21:54.928 @path[10.0.0.2, 4421]: 19838 00:21:54.928 @path[10.0.0.2, 4421]: 19917 00:21:54.928 @path[10.0.0.2, 4421]: 19916 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97261 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:54.928 04:17:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:54.928 04:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:54.928 04:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97373 00:21:54.928 04:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:54.928 04:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.489 Attaching 4 probes... 
00:22:01.489 00:22:01.489 00:22:01.489 00:22:01.489 00:22:01.489 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97373 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:01.489 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:01.747 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:01.747 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97493 00:22:01.747 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:01.747 04:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:08.316 04:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:08.316 04:18:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:08.316 Attaching 4 probes... 
00:22:08.316 @path[10.0.0.2, 4421]: 19086 00:22:08.316 @path[10.0.0.2, 4421]: 19377 00:22:08.316 @path[10.0.0.2, 4421]: 19571 00:22:08.316 @path[10.0.0.2, 4421]: 19437 00:22:08.316 @path[10.0.0.2, 4421]: 19400 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97493 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:08.316 04:18:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:09.250 04:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:09.250 04:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97612 00:22:09.250 04:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:09.250 04:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:15.813 Attaching 4 probes... 
00:22:15.813 @path[10.0.0.2, 4420]: 20233 00:22:15.813 @path[10.0.0.2, 4420]: 20675 00:22:15.813 @path[10.0.0.2, 4420]: 20528 00:22:15.813 @path[10.0.0.2, 4420]: 20816 00:22:15.813 @path[10.0.0.2, 4420]: 20728 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97612 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:15.813 04:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:15.813 [2024-07-23 04:18:08.990943] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:15.813 04:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:16.072 04:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:22.633 04:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:22.633 04:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97788 00:22:22.633 04:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96939 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:22.633 04:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:29.213 Attaching 4 probes... 
00:22:29.213 @path[10.0.0.2, 4421]: 18649 00:22:29.213 @path[10.0.0.2, 4421]: 19309 00:22:29.213 @path[10.0.0.2, 4421]: 19159 00:22:29.213 @path[10.0.0.2, 4421]: 19129 00:22:29.213 @path[10.0.0.2, 4421]: 19214 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97788 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96984 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 96984 ']' 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 96984 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96984 00:22:29.213 killing process with pid 96984 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96984' 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 96984 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 96984 00:22:29.213 Connection closed with partial response: 00:22:29.213 00:22:29.213 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96984 00:22:29.213 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:29.213 [2024-07-23 04:17:24.851534] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:29.213 [2024-07-23 04:17:24.851701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96984 ] 00:22:29.213 [2024-07-23 04:17:24.969016] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
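Everything from the 'cat .../try.txt' above through the end of this excerpt is bdevperf's own log replayed after the fact: it starts on a single core (core mask 0x4 from the earlier bdevperf invocation), runs the 90-second verify workload, and in the windows where a listener is flipped to 'inaccessible' its completions come back as ASYMMETRIC ACCESS INACCESSIBLE (03/02). A quick way to summarise that path flapping from the dumped file (a sketch using standard tools, not part of the harness):

grep -o 'ASYMMETRIC ACCESS [A-Z]*' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c
# counts completions per ANA status string; compare the totals against the windows in which
# nvmf_subsystem_listener_set_ana_state put port 4420 or 4421 into the 'inaccessible' state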
00:22:29.213 [2024-07-23 04:17:24.987760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.213 [2024-07-23 04:17:25.057513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.213 [2024-07-23 04:17:25.112697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:29.213 Running I/O for 90 seconds... 00:22:29.213 [2024-07-23 04:17:34.715515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.213 [2024-07-23 04:17:34.715579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.715649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.213 [2024-07-23 04:17:34.715670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.715692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.213 [2024-07-23 04:17:34.715707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.715727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.213 [2024-07-23 04:17:34.715741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.715760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.213 [2024-07-23 04:17:34.715774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.715793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.213 [2024-07-23 04:17:34.715808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.715827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.213 [2024-07-23 04:17:34.715841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.715860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.213 [2024-07-23 04:17:34.715874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.715893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.213 [2024-07-23 04:17:34.715919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.715943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.213 [2024-07-23 04:17:34.715998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.716021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.213 [2024-07-23 04:17:34.716036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.716055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.213 [2024-07-23 04:17:34.716070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.716089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.213 [2024-07-23 04:17:34.716104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.716124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.213 [2024-07-23 04:17:34.716138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:29.213 [2024-07-23 04:17:34.716157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.213 [2024-07-23 04:17:34.716171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 
04:17:34.716637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.716978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.716998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110968 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.717013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.214 [2024-07-23 04:17:34.717046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.214 [2024-07-23 04:17:34.717595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:29.214 [2024-07-23 04:17:34.717614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.717634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.717653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.717668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f 
p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.717686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.717700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.717735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.717754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.717774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.717788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.717807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.717821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.717840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.717854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.717872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.717886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.717933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.215 [2024-07-23 04:17:34.717950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.717988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.215 [2024-07-23 04:17:34.718003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.215 [2024-07-23 04:17:34.718042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.215 [2024-07-23 04:17:34.718077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.215 [2024-07-23 04:17:34.718119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.215 [2024-07-23 04:17:34.718155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.215 [2024-07-23 04:17:34.718190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.215 [2024-07-23 04:17:34.718225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.215 [2024-07-23 04:17:34.718259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.215 [2024-07-23 04:17:34.718308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 
04:17:34.718440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110528 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:29.215 [2024-07-23 04:17:34.718802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.215 [2024-07-23 04:17:34.718816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.718835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.718849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.718871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.216 [2024-07-23 04:17:34.718887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.718921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.216 [2024-07-23 04:17:34.718946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.718970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.216 [2024-07-23 04:17:34.718985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.216 [2024-07-23 04:17:34.719055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.216 [2024-07-23 04:17:34.719094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.216 [2024-07-23 04:17:34.719129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.216 [2024-07-23 04:17:34.719166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.216 [2024-07-23 04:17:34.719201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:22:29.216 [2024-07-23 04:17:34.719568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.719963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.719997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.720014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.720034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.720051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:29.216 [2024-07-23 04:17:34.720071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.216 [2024-07-23 04:17:34.720086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:34.720122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:34.720187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:34.720223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:34.720258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:34.720308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 
04:17:34.720356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:34.720390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:34.720424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:34.720472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:34.720508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:34.720542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:34.720576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:34.720609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:34.720642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:34.720675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110792 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:34.720708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:34.720728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:34.720742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:41.249519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:41.249586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:41.249621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:41.249672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:41.249707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:41.249739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:41.249771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.217 [2024-07-23 04:17:41.249802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:41.249839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:41.249870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:41.249902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:41.249951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.249969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:41.249983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.250002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:41.250015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.250033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:41.250046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.250064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:41.250077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:29.217 [2024-07-23 04:17:41.250119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.217 [2024-07-23 04:17:41.250135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:29.218 [2024-07-23 04:17:41.250155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.218 [2024-07-23 04:17:41.250169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:22:29.218 [2024-07-23 04:17:41.250187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.218 [2024-07-23 04:17:41.250201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:29.218 [2024-07-23 04:17:41.250219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.218 [2024-07-23 04:17:41.250233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:29.218 [2024-07-23 04:17:41.250251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.218 [2024-07-23 04:17:41.250264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:29.218 [2024-07-23 04:17:41.250298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.218 [2024-07-23 04:17:41.250313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:29.218 [2024-07-23 04:17:41.250331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.218 [2024-07-23 04:17:41.250345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:29.218 [2024-07-23 04:17:41.250365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.218 [2024-07-23 04:17:41.250379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:29.218 [2024-07-23 04:17:41.250594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.218 [2024-07-23 04:17:41.250616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:29.218 [2024-07-23 04:17:41.250637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.218 [2024-07-23 04:17:41.250652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:29.218 [2024-07-23 04:17:41.250672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.218 [2024-07-23 04:17:41.250688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:29.218 [2024-07-23 04:17:41.250707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.218 [2024-07-23 04:17:41.250721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
[2024-07-23 04:17:41.250 - 04:17:41.271, elapsed 00:22:29.218 - 00:22:29.225: several hundred near-identical nvme_qpair.c notices condensed here. Each 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pair reports a queued READ or WRITE on sqid:1, nsid:1 (lba approximately 11584-12600, len:8, SGL DATA BLOCK OFFSET or SGL TRANSPORT DATA BLOCK) completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0 p:0 m:0 dnr:0.]
00:22:29.225 [2024-07-23 04:17:41.271644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91
nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.271657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.271676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.271689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.271708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.271727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.271746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.271760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.271779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.271793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.271812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.271825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.271844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.271857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.271876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.271889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.271923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.271964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.271987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:22:29.225 [2024-07-23 04:17:41.272782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.272971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.272998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.273017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.273043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.273062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.273089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.273107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.273143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.273163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:29.225 [2024-07-23 04:17:41.273190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.225 [2024-07-23 04:17:41.273209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.273267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.273955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.273984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.274004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.275717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.226 [2024-07-23 04:17:41.275756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.275793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.275815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.275843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.275862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.275888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.275946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.275977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 
[2024-07-23 04:17:41.275996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12072 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:29.226 [2024-07-23 04:17:41.276591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.226 [2024-07-23 04:17:41.276612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.276639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.276658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.276685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.276713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.276742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.276761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.276788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.276807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.276833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.276852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.276878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.276913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.276953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.276973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.276999] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.277018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.277064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.277108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.277154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.277199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.277245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 
04:17:41.277499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.277972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 
sqhd:000e p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.277998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.278017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.278044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.227 [2024-07-23 04:17:41.278063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.278089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.278108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.278136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.278155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.278181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.278200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.278226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.278245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.278279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.278306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.278332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.227 [2024-07-23 04:17:41.278351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:29.227 [2024-07-23 04:17:41.278377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.228 [2024-07-23 04:17:41.278396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.228 [2024-07-23 04:17:41.278441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278952] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.278979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.278998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.279038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.279062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.279089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.279109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.279135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.279163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.279191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.279211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.279238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.279257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.279283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.279301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.279337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.279357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.280614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.280648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.280682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:29.228 [2024-07-23 04:17:41.280704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.280731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.228 [2024-07-23 04:17:41.280750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.280778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.228 [2024-07-23 04:17:41.280798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.280825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.228 [2024-07-23 04:17:41.280844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.280870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.228 [2024-07-23 04:17:41.280889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.280942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.228 [2024-07-23 04:17:41.280963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.280990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.228 [2024-07-23 04:17:41.281022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.281052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.228 [2024-07-23 04:17:41.281072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.281099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.228 [2024-07-23 04:17:41.281118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.281144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.281163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.281189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:19 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.281208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.281235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.228 [2024-07-23 04:17:41.281254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:29.228 [2024-07-23 04:17:41.281288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.281969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.281996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:22:29.229 [2024-07-23 04:17:41.282179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.229 [2024-07-23 04:17:41.282641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.229 [2024-07-23 04:17:41.282686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:29.229 [2024-07-23 04:17:41.282712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.229 [2024-07-23 04:17:41.282731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.282758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.282777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.282803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.282831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.282859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.282878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.282925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.282948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.282975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.282994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.283058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.283105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.283151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.283197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.283242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.283288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.283333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.283378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.230 [2024-07-23 04:17:41.283432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:88 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.283977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.283991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.284010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.284025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.284044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.284057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.284077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.284091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.284110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.284125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.284143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.284157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.284177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.230 [2024-07-23 04:17:41.284191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:29.230 [2024-07-23 04:17:41.284210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.284224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.284257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.284304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.284345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.284377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.284409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.284441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.284473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
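One detail in the run of completions just above: the sqhd field steps 007e, 007f and then wraps to 0000, which is consistent with an I/O submission queue of 128 entries on qid 1. The depth is inferred from the wrap seen in this log, not taken from the target's configuration; a minimal sketch of that arithmetic, under the same assumption:

# The queue depth of 128 is an inference from the 0x7f -> 0x0000 wrap printed
# above, not a value read from the connect parameters of this run.
QUEUE_DEPTH = 128

observed = [0x7b, 0x7c, 0x7d, 0x7e, 0x7f, 0x00, 0x01, 0x02, 0x03, 0x04]
expected = [(observed[0] + i) % QUEUE_DEPTH for i in range(len(observed))]
assert observed == expected
print("sqhd advances by one per completion and wraps at", QUEUE_DEPTH)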
00:22:29.231 [2024-07-23 04:17:41.284620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.284978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.284997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.285011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.285045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.285078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.285111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.285144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.285184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.285225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.285265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.285312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.231 [2024-07-23 04:17:41.285346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.285379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.231 [2024-07-23 04:17:41.285411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:29.231 [2024-07-23 04:17:41.285430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:29.232 [2024-07-23 04:17:41.285681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.285857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.285876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.293226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.293275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.293295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:41.293717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:41.293744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:48.221109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88184 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:48.221198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:48.221254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:48.221286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:48.221318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:48.221350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:48.221382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:48.221413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.232 [2024-07-23 04:17:48.221444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.232 [2024-07-23 04:17:48.221478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.232 [2024-07-23 04:17:48.221510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.232 [2024-07-23 04:17:48.221542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.232 [2024-07-23 04:17:48.221573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.232 [2024-07-23 04:17:48.221603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.232 [2024-07-23 04:17:48.221642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.232 [2024-07-23 04:17:48.221676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:48.221935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:29.232 [2024-07-23 04:17:48.221963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.232 [2024-07-23 04:17:48.221979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.221999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
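The failures in this excerpt arrive in two bursts, one stamped 04:17:41 over lba 11584-12600 and one stamped 04:17:48 over lba 87664-88504, consistent with the I/O being retried while the path stays inaccessible. When reading an archived console log like this offline, a short script can condense such a span; the file name below is a placeholder and the pattern is only an assumption based on the format seen here, with \s+ tolerating the line wrapping introduced when the console output was archived:

import re

# Placeholder path for the captured console log; substitute the real file.
LOG_PATH = "nvmf-tcp-uring-vg-autotest-console.log"

# One notice = bracketed timestamp + nvme_qpair.c print_command record.
NOTICE_RE = re.compile(
    r"\[(?P<second>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+\]\s+nvme_qpair\.c:\s+"
    r"\d+:nvme_io_qpair_print_command:\s+\*NOTICE\*:\s+(?P<op>READ|WRITE)\s+"
    r"sqid:\d+\s+cid:\d+\s+nsid:\d+\s+lba:(?P<lba>\d+)"
)

def summarize(path: str) -> None:
    """Group the failed-I/O notices by wall-clock second and opcode."""
    text = open(path, errors="replace").read()
    buckets, lbas = {}, []
    for m in NOTICE_RE.finditer(text):
        key = (m["second"], m["op"])
        buckets[key] = buckets.get(key, 0) + 1
        lbas.append(int(m["lba"]))
    for (second, op), count in sorted(buckets.items()):
        print(f"{second} {op:5} x{count}")
    if lbas:
        print("lba span:", min(lbas), "-", max(lbas))

summarize(LOG_PATH)

On a capture like this one it would print one READ/WRITE count per second of failures plus the overall lba span, which makes the two bursts easy to see without scrolling through every notice.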
00:22:29.233 [2024-07-23 04:17:48.222099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.233 [2024-07-23 04:17:48.222454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.222981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.222995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.223015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.223040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.223062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.223077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:29.233 [2024-07-23 04:17:48.223097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.233 [2024-07-23 04:17:48.223112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:29.234 [2024-07-23 04:17:48.223146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.223188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.223222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.223256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.223289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.223338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.234 [2024-07-23 04:17:48.223814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.223848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.223884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.223944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.223967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.223981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.224001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.224016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.224036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.224056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.224095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.234 [2024-07-23 04:17:48.224110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:29.234 [2024-07-23 04:17:48.224131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.224146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.224181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.224218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:22:29.235 [2024-07-23 04:17:48.224239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.224253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.224289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.224325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.224374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.224409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.224443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.224976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.224991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.225021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.225036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.225057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.225071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.225091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.235 [2024-07-23 04:17:48.225105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.225126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.225140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.225161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.225175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.225195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.225210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.225230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.225244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.225264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.225278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.225298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.235 [2024-07-23 04:17:48.225313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.235 [2024-07-23 04:17:48.225333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:29.235 [2024-07-23 04:17:48.225347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:17:48.225382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:17:48.225417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:17:48.225457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:17:48.225493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:17:48.225528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:17:48.225562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:17:48.225596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:17:48.225631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:17:48.225665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 
nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:17:48.225700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:17:48.225735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:17:48.225769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:17:48.225804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:17:48.225838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:17:48.225878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:17:48.225928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:17:48.225963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:17:48.225983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:17:48.225998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.463597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:18:01.463651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.463719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:18:01.463740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.463761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:18:01.463775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.463794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:18:01.463807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.463826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:18:01.463839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.463859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:18:01.463873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.463891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:18:01.463920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.463955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.236 [2024-07-23 04:18:01.463970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.463989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:18:01.464023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.464045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:18:01.464059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.464078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:18:01.464092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:22:29.236 [2024-07-23 04:18:01.464111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:18:01.464124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.464143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:18:01.464156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.464175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.236 [2024-07-23 04:18:01.464188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:29.236 [2024-07-23 04:18:01.464207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.464220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.464252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.464284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.464330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.464361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.464393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.464424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.464466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.464498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.464531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.464983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.464995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.465008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.465021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.465034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.465046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:29.237 [2024-07-23 04:18:01.465059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.465071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.465085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.465097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.465110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.465123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.465136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.465148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.465162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.465174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.465195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.465208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.465221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.237 [2024-07-23 04:18:01.465233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.465247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.465259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.237 [2024-07-23 04:18:01.465272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.237 [2024-07-23 04:18:01.465284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465323] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.465674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.465700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.465725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.465751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.465777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.465802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.465827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.465858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.465986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.465999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.466012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.466025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.466038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.466050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.466064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.466076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.466090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.238 [2024-07-23 04:18:01.466102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.466116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.466128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.466142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46256 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.466154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.238 [2024-07-23 04:18:01.466168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.238 [2024-07-23 04:18:01.466180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 
[2024-07-23 04:18:01.466435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.239 [2024-07-23 04:18:01.466535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.239 [2024-07-23 04:18:01.466931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.239 [2024-07-23 04:18:01.466946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.466960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.240 [2024-07-23 04:18:01.466972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.466985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.240 [2024-07-23 04:18:01.466997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.240 [2024-07-23 04:18:01.467049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.240 [2024-07-23 04:18:01.467077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.240 [2024-07-23 04:18:01.467104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.240 [2024-07-23 04:18:01.467130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.240 [2024-07-23 04:18:01.467156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.240 [2024-07-23 04:18:01.467182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc79c40 is same with the state(5) to be set 00:22:29.240 [2024-07-23 04:18:01.467211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:29.240 [2024-07-23 04:18:01.467220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:29.240 [2024-07-23 04:18:01.467230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45920 len:8 PRP1 0x0 PRP2 0x0 00:22:29.240 [2024-07-23 04:18:01.467242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:29.240 [2024-07-23 04:18:01.467269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:29.240 
[2024-07-23 04:18:01.467286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46376 len:8 PRP1 0x0 PRP2 0x0 00:22:29.240 [2024-07-23 04:18:01.467299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:29.240 [2024-07-23 04:18:01.467320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:29.240 [2024-07-23 04:18:01.467330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46384 len:8 PRP1 0x0 PRP2 0x0 00:22:29.240 [2024-07-23 04:18:01.467356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:29.240 [2024-07-23 04:18:01.467377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:29.240 [2024-07-23 04:18:01.467386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46392 len:8 PRP1 0x0 PRP2 0x0 00:22:29.240 [2024-07-23 04:18:01.467397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:29.240 [2024-07-23 04:18:01.467417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:29.240 [2024-07-23 04:18:01.467426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46400 len:8 PRP1 0x0 PRP2 0x0 00:22:29.240 [2024-07-23 04:18:01.467438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:29.240 [2024-07-23 04:18:01.467459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:29.240 [2024-07-23 04:18:01.467467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46408 len:8 PRP1 0x0 PRP2 0x0 00:22:29.240 [2024-07-23 04:18:01.467479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:29.240 [2024-07-23 04:18:01.467500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:29.240 [2024-07-23 04:18:01.467509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46416 len:8 PRP1 0x0 PRP2 0x0 00:22:29.240 [2024-07-23 04:18:01.467521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:29.240 [2024-07-23 04:18:01.467542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:29.240 [2024-07-23 04:18:01.467551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46424 len:8 PRP1 0x0 PRP2 0x0 00:22:29.240 [2024-07-23 04:18:01.467562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:29.240 [2024-07-23 04:18:01.467582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:29.240 [2024-07-23 04:18:01.467591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46432 len:8 PRP1 0x0 PRP2 0x0 00:22:29.240 [2024-07-23 04:18:01.467602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467674] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc79c40 was disconnected and freed. reset controller. 00:22:29.240 [2024-07-23 04:18:01.467798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.240 [2024-07-23 04:18:01.467824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.240 [2024-07-23 04:18:01.467850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.240 [2024-07-23 04:18:01.467874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.240 [2024-07-23 04:18:01.467912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.240 [2024-07-23 04:18:01.467940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.240 [2024-07-23 04:18:01.467957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc54db0 is same with the state(5) to be set 00:22:29.240 [2024-07-23 04:18:01.468962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:29.240 [2024-07-23 04:18:01.468997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc54db0 (9): Bad file descriptor 00:22:29.240 [2024-07-23 04:18:01.469369] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.240 [2024-07-23 04:18:01.469400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc54db0 with addr=10.0.0.2, port=4421 
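For reference, the errno = 111 reported by uring_sock_create in the entry above is ECONNREFUSED: the reconnect attempt to 10.0.0.2:4421 is being refused, presumably because the multipath test has removed that listener, and the controller reset only succeeds about ten seconds later in the entries below. A quick way to confirm the errno mapping on the test host — an aside for the reader, not part of the test scripts:
python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'   # prints: ECONNREFUSED Connection refused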
00:22:29.240 [2024-07-23 04:18:01.469415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc54db0 is same with the state(5) to be set 00:22:29.240 [2024-07-23 04:18:01.469448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc54db0 (9): Bad file descriptor 00:22:29.240 [2024-07-23 04:18:01.469476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:29.240 [2024-07-23 04:18:01.469491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:29.240 [2024-07-23 04:18:01.469504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:29.240 [2024-07-23 04:18:01.469541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:29.240 [2024-07-23 04:18:01.469556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:29.240 [2024-07-23 04:18:11.531203] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:29.241 Received shutdown signal, test time was about 54.957356 seconds 00:22:29.241 00:22:29.241 Latency(us) 00:22:29.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.241 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:29.241 Verification LBA range: start 0x0 length 0x4000 00:22:29.241 Nvme0n1 : 54.96 8490.94 33.17 0.00 0.00 15046.90 696.32 7046430.72 00:22:29.241 =================================================================================================================== 00:22:29.241 Total : 8490.94 33.17 0.00 0.00 15046.90 696.32 7046430.72 00:22:29.241 04:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.241 rmmod nvme_tcp 00:22:29.241 rmmod nvme_fabrics 00:22:29.241 rmmod nvme_keyring 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 96939 ']' 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@490 -- # killprocess 96939 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 96939 ']' 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 96939 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96939 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:29.241 killing process with pid 96939 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96939' 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 96939 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 96939 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:29.241 00:22:29.241 real 1m0.298s 00:22:29.241 user 2m45.540s 00:22:29.241 sys 0m18.921s 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:29.241 ************************************ 00:22:29.241 END TEST nvmf_host_multipath 00:22:29.241 ************************************ 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.241 ************************************ 00:22:29.241 START TEST nvmf_timeout 00:22:29.241 ************************************ 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:29.241 * Looking for test storage... 00:22:29.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.241 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
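Two RPC control sockets are in play from here on: target-side nvmf_* calls go to the nvmf_tgt application on its default socket (/var/tmp/spdk.sock), while the bdev_nvme_* calls that drive the initiator go to the bdevperf instance via the bdevperf_rpc_sock defined above. A minimal sketch of that split, using commands that appear later in this trace (paths abbreviated relative to the spdk repo):
# target side: configure the NVMe-oF target over the default application socket
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# host side: configure the initiator inside bdevperf over /var/tmp/bdevperf.sock
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1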
00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:29.501 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:29.502 Cannot find device "nvmf_tgt_br" 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:29.502 Cannot find device "nvmf_tgt_br2" 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:29.502 Cannot find device "nvmf_tgt_br" 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:29.502 Cannot find device "nvmf_tgt_br2" 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:29.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:29.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:29.502 04:18:22 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:29.502 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:29.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:22:29.761 00:22:29.761 --- 10.0.0.2 ping statistics --- 00:22:29.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.761 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:29.761 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:29.761 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:22:29.761 00:22:29.761 --- 10.0.0.3 ping statistics --- 00:22:29.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.761 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:29.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:29.761 00:22:29.761 --- 10.0.0.1 ping statistics --- 00:22:29.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.761 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=98094 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 98094 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 98094 ']' 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.761 04:18:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:29.761 [2024-07-23 04:18:22.966869] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:29.761 [2024-07-23 04:18:22.966976] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.761 [2024-07-23 04:18:23.090852] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
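The nvmf_veth_init trace above boils down to the following topology; this is a condensed, commented recap of commands already shown (link-up steps omitted for brevity), not a substitute for nvmf/common.sh:
ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair (stays in the root ns)
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # first target-side pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link add nvmf_br type bridge                                 # bridge joins the three *_br peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the default port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the three pings above (10.0.0.2, 10.0.0.3, 10.0.0.1) verify this wiring before the target is started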
00:22:29.761 [2024-07-23 04:18:23.103070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:30.021 [2024-07-23 04:18:23.159274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.021 [2024-07-23 04:18:23.159352] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.021 [2024-07-23 04:18:23.159361] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.021 [2024-07-23 04:18:23.159368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.021 [2024-07-23 04:18:23.159374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.021 [2024-07-23 04:18:23.159483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.021 [2024-07-23 04:18:23.159748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.021 [2024-07-23 04:18:23.209987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:30.588 04:18:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.588 04:18:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:30.588 04:18:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.588 04:18:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:30.588 04:18:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:30.588 04:18:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.588 04:18:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.588 04:18:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:30.846 [2024-07-23 04:18:24.178306] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.104 04:18:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:31.362 Malloc0 00:22:31.362 04:18:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.619 04:18:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.620 04:18:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.878 [2024-07-23 04:18:25.171703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.878 04:18:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=98142 00:22:31.878 04:18:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:31.878 04:18:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 
-- # waitforlisten 98142 /var/tmp/bdevperf.sock 00:22:31.878 04:18:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 98142 ']' 00:22:31.878 04:18:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.878 04:18:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.878 04:18:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.878 04:18:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.878 04:18:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:32.137 [2024-07-23 04:18:25.230389] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:32.137 [2024-07-23 04:18:25.230471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98142 ] 00:22:32.137 [2024-07-23 04:18:25.347937] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:32.137 [2024-07-23 04:18:25.367050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.137 [2024-07-23 04:18:25.434613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.483 [2024-07-23 04:18:25.486263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:33.049 04:18:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.049 04:18:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:33.049 04:18:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:33.049 04:18:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:33.307 NVMe0n1 00:22:33.307 04:18:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=98167 00:22:33.307 04:18:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:33.307 04:18:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:33.565 Running I/O for 10 seconds... 
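At this point the timeout test proper begins: bdevperf is attached to nqn.2016-06.io.spdk:cnode1 with a 2-second reconnect delay and a 5-second controller-loss timeout, and about one second into the 10-second verify run the script removes the 10.0.0.2:4420 listener out from under it, which is what produces the wave of ABORTED - SQ DELETION completions printed below. A condensed sketch of that host-side sequence, taken from the commands in this trace (paths abbreviated, perform_tests backgrounded as implied by the captured rpc_pid):
# initiator side, all over the bdevperf RPC socket
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 10 s verify run, qd 128, 4 KiB I/O
sleep 1
# target side: remove the only listener while I/O is in flight to force the path down
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420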
00:22:34.503 04:18:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.764 [2024-07-23 04:18:27.859832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.764 [2024-07-23 04:18:27.859922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.764 [2024-07-23 04:18:27.859945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.764 [2024-07-23 04:18:27.859955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.764 [2024-07-23 04:18:27.859966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.764 [2024-07-23 04:18:27.859975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.859985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.859994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.860013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.860031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.860050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.860068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:34.765 [2024-07-23 04:18:27.860105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860951] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.860962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.860971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.861090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.861111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.861132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.861398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.861422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.861443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.861463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.861483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.861503] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.861522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.765 [2024-07-23 04:18:27.861827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.861856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.861876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.861931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.861954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.861966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.861975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.862086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.862103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.862114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.862257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.862506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.862526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.862540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.765 [2024-07-23 04:18:27.862550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.765 [2024-07-23 04:18:27.862561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.862570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.862580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.862590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.862600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.862609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.862620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.862628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.862984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:34.766 [2024-07-23 04:18:27.863447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.863583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.863933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.863960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.863979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.863991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.864000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.864011] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.864133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.864146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.864156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.864241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.864253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.864264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.864273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.864284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.864292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.864303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.864312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.864323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.864332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.864343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.864352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.864613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.864636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.864940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.865035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:116 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.865060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.766 [2024-07-23 04:18:27.865081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.865102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.865123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.865143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.865419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.865555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.865692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.865722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.865986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.766 [2024-07-23 04:18:27.865998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74336 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.766 [2024-07-23 04:18:27.866007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.866228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.866254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.866274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.866294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.866439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.866679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.866705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.767 [2024-07-23 04:18:27.866724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.767 [2024-07-23 04:18:27.866841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:34.767 [2024-07-23 04:18:27.866869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.866880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.767 [2024-07-23 04:18:27.867174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.867279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.767 [2024-07-23 04:18:27.867293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.867304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.767 [2024-07-23 04:18:27.867314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.867325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.767 [2024-07-23 04:18:27.867334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.867359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:34.767 [2024-07-23 04:18:27.867368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.867379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.867388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.867508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.867524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.867653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.867667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.867678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.867795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.867812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.867822] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.868051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.868077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.868090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.767 [2024-07-23 04:18:27.868099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.868112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb7f10 is same with the state(5) to be set 00:22:34.767 [2024-07-23 04:18:27.868124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.767 [2024-07-23 04:18:27.868135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.767 [2024-07-23 04:18:27.868143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74456 len:8 PRP1 0x0 PRP2 0x0 00:22:34.767 [2024-07-23 04:18:27.868152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.868162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.767 [2024-07-23 04:18:27.868170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.767 [2024-07-23 04:18:27.868177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74784 len:8 PRP1 0x0 PRP2 0x0 00:22:34.767 [2024-07-23 04:18:27.868186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.868466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.767 [2024-07-23 04:18:27.868477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.767 [2024-07-23 04:18:27.868486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74792 len:8 PRP1 0x0 PRP2 0x0 00:22:34.767 [2024-07-23 04:18:27.868495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.868505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.767 [2024-07-23 04:18:27.868512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.767 [2024-07-23 04:18:27.868519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74800 len:8 PRP1 0x0 PRP2 0x0 00:22:34.767 [2024-07-23 04:18:27.868528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.868536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.767 [2024-07-23 04:18:27.868544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:34.767 [2024-07-23 04:18:27.868551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74808 len:8 PRP1 0x0 PRP2 0x0 00:22:34.767 [2024-07-23 04:18:27.868559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.868568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.767 [2024-07-23 04:18:27.868575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.767 [2024-07-23 04:18:27.868694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74816 len:8 PRP1 0x0 PRP2 0x0 00:22:34.767 [2024-07-23 04:18:27.868708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.767 [2024-07-23 04:18:27.868849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.767 [2024-07-23 04:18:27.868860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.767 [2024-07-23 04:18:27.869080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74824 len:8 PRP1 0x0 PRP2 0x0 00:22:34.767 [2024-07-23 04:18:27.869106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.869118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.869126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.869134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74832 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.869143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.869152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.869160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.869167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74840 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.869175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.869185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.869191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.869199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74848 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.869207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.869217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.869335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 
04:18:27.869348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74856 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.869487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.869501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.869611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.869629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74864 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.869639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.869649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.869759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.869770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74872 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.869779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.869790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.869938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.870141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74880 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.870154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.870164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.870173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.870181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74888 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.870190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.870199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.870206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.870214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74896 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.870223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.870232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.870239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.870247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74904 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.870255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.870497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.870511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.870519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74912 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.870656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.870763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.870775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.870783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74920 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.870791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.870800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.870884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.870932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74928 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.871034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.871049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.871057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.871065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74936 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.871077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.871189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.871209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.871218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74944 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.871352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.871480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.871498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.871605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:74952 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.871623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.871634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.871642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.871889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74960 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.871931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.871943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.768 [2024-07-23 04:18:27.871951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.768 [2024-07-23 04:18:27.871959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74968 len:8 PRP1 0x0 PRP2 0x0 00:22:34.768 [2024-07-23 04:18:27.871968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.872249] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeb7f10 was disconnected and freed. reset controller. 00:22:34.768 [2024-07-23 04:18:27.872490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.768 [2024-07-23 04:18:27.872569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.768 [2024-07-23 04:18:27.872583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.769 [2024-07-23 04:18:27.872592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.769 [2024-07-23 04:18:27.872602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.769 [2024-07-23 04:18:27.872611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.769 [2024-07-23 04:18:27.872620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.769 [2024-07-23 04:18:27.872629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.769 [2024-07-23 04:18:27.872756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebccb0 is same with the state(5) to be set 00:22:34.769 [2024-07-23 04:18:27.873184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:34.769 [2024-07-23 04:18:27.873226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebccb0 (9): Bad file descriptor 00:22:34.769 [2024-07-23 04:18:27.873538] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.769 [2024-07-23 
04:18:27.873577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebccb0 with addr=10.0.0.2, port=4420 00:22:34.769 [2024-07-23 04:18:27.873590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebccb0 is same with the state(5) to be set 00:22:34.769 [2024-07-23 04:18:27.873610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebccb0 (9): Bad file descriptor 00:22:34.769 [2024-07-23 04:18:27.873626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:34.769 [2024-07-23 04:18:27.874028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:34.769 [2024-07-23 04:18:27.874137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:34.769 [2024-07-23 04:18:27.874164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.769 [2024-07-23 04:18:27.874176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:34.769 04:18:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:36.774 [2024-07-23 04:18:29.874382] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.774 [2024-07-23 04:18:29.874443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebccb0 with addr=10.0.0.2, port=4420 00:22:36.774 [2024-07-23 04:18:29.874472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebccb0 is same with the state(5) to be set 00:22:36.774 [2024-07-23 04:18:29.874491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebccb0 (9): Bad file descriptor 00:22:36.774 [2024-07-23 04:18:29.874507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:36.774 [2024-07-23 04:18:29.874516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:36.774 [2024-07-23 04:18:29.874524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:36.774 [2024-07-23 04:18:29.874543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
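A note on the connect() failures above: errno 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections on 10.0.0.2:4420 while the initiator retries; host/timeout.sh itself just waits (the @56 sleep 2 traced above) and lets the bdev_nvme reconnect logic run. As a hedged sketch only (the command appears verbatim later in this log; when it actually runs is up to the script), the target-side call that restores the listener so a reconnect can succeed is:

  # errno 111 == ECONNREFUSED: no listener on 10.0.0.2:4420 at this point.
  # Re-adding the TCP listener for the subsystem lets the next reconnect attempt succeed.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420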
00:22:36.774 [2024-07-23 04:18:29.874554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:36.774 04:18:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:36.774 04:18:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:36.774 04:18:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:37.033 04:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:37.033 04:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:37.033 04:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:37.033 04:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:37.292 04:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:37.292 04:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:38.669 [2024-07-23 04:18:31.874664] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.669 [2024-07-23 04:18:31.874744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebccb0 with addr=10.0.0.2, port=4420 00:22:38.669 [2024-07-23 04:18:31.874760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebccb0 is same with the state(5) to be set 00:22:38.669 [2024-07-23 04:18:31.874784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebccb0 (9): Bad file descriptor 00:22:38.669 [2024-07-23 04:18:31.874801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:38.669 [2024-07-23 04:18:31.874811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:38.669 [2024-07-23 04:18:31.874821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:38.669 [2024-07-23 04:18:31.874844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.669 [2024-07-23 04:18:31.874855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:40.573 [2024-07-23 04:18:33.875208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:40.573 [2024-07-23 04:18:33.875265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:40.573 [2024-07-23 04:18:33.875294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:40.573 [2024-07-23 04:18:33.875304] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:40.573 [2024-07-23 04:18:33.875343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
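The get_controller/get_bdev checks traced at host/timeout.sh@41 and @37 above reduce to two RPCs against the bdevperf application's socket. A reconstructed sketch of that pattern, built only from the commands visible in this trace (the helper names and wrapping are illustrative, not the verbatim script source):

  # Reconstructed from the xtrace above; illustrative only.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  get_controller() {
      "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  }
  get_bdev() {
      "$rpc_py" -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
  }
  # While the controller is still being retried, both names are reported:
  [[ $(get_controller) == "NVMe0" ]]
  [[ $(get_bdev) == "NVMe0n1" ]]
  # Once the controller is given up ("already in failed state" above), both calls return
  # nothing, which is why the later checks in this log compare against empty strings.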
00:22:41.950 00:22:41.950 Latency(us) 00:22:41.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.950 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.950 Verification LBA range: start 0x0 length 0x4000 00:22:41.950 NVMe0n1 : 8.14 1136.01 4.44 15.73 0.00 111186.77 3321.48 7046430.72 00:22:41.950 =================================================================================================================== 00:22:41.950 Total : 1136.01 4.44 15.73 0.00 111186.77 3321.48 7046430.72 00:22:41.950 0 00:22:42.209 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:42.209 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:42.209 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:42.468 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:42.468 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:42.468 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:42.468 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 98167 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 98142 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 98142 ']' 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 98142 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98142 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:42.727 killing process with pid 98142 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98142' 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 98142 00:22:42.727 Received shutdown signal, test time was about 9.123889 seconds 00:22:42.727 00:22:42.727 Latency(us) 00:22:42.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.727 =================================================================================================================== 00:22:42.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.727 04:18:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 98142 00:22:42.727 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:42.986 [2024-07-23 04:18:36.291113] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:22:42.986 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=98283 00:22:42.986 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:42.986 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 98283 /var/tmp/bdevperf.sock 00:22:42.986 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 98283 ']' 00:22:42.986 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.986 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.986 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.986 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.986 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:43.245 [2024-07-23 04:18:36.353550] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:22:43.245 [2024-07-23 04:18:36.353653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98283 ] 00:22:43.245 [2024-07-23 04:18:36.470046] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:43.245 [2024-07-23 04:18:36.487354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.245 [2024-07-23 04:18:36.543811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.504 [2024-07-23 04:18:36.595769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:43.504 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.504 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:43.504 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:43.763 04:18:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:44.022 NVMe0n1 00:22:44.022 04:18:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=98296 00:22:44.022 04:18:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:44.022 04:18:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:44.022 Running I/O for 10 seconds... 
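For reference, the attach sequence for this second bdevperf run, collected from the trace above into one sketch (paths, names, and values are taken verbatim from the log; the meaning given for the three timeout options is the usual bdev_nvme interpretation, paraphrased here rather than quoted from the script):

  # Options as traced above. Roughly: retry the connection every 1 s, start failing
  # queued I/O after ~2 s without a connection, and delete the controller after ~5 s
  # without a successful reconnect.
  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc_py bdev_nvme_set_options -r -1
  $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 \
      --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # The workload is then kicked off over the same socket:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests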
00:22:44.957 04:18:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.320 [2024-07-23 04:18:38.412920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.412993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 
[2024-07-23 04:18:38.413178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.413666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.413686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.413705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.413723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.413741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.413760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.413771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.413779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.414785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.414804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.414840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.414850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.415146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.415278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.415290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.415427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.415514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.415532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.415542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.415553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.320 [2024-07-23 04:18:38.415561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.415572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.320 [2024-07-23 04:18:38.415581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.320 [2024-07-23 04:18:38.415698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.415711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.415721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.415730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.415875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.415890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:45.321 [2024-07-23 04:18:38.416282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416491] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.416980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.416991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76520 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:45.321 [2024-07-23 04:18:38.417337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.321 [2024-07-23 04:18:38.417494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.321 [2024-07-23 04:18:38.417505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417533] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.322 [2024-07-23 04:18:38.417863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170bf10 is same with the state(5) to be set 00:22:45.322 [2024-07-23 04:18:38.417885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.322 [2024-07-23 04:18:38.417893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.322 [2024-07-23 04:18:38.417901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76816 len:8 PRP1 0x0 PRP2 0x0 00:22:45.322 [2024-07-23 04:18:38.417910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.322 [2024-07-23 04:18:38.417989] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x170bf10 was disconnected and freed. reset controller. 
00:22:45.322 [2024-07-23 04:18:38.418249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:45.322 [2024-07-23 04:18:38.418975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1710cb0 (9): Bad file descriptor 00:22:45.322 [2024-07-23 04:18:38.419188] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.322 [2024-07-23 04:18:38.419226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1710cb0 with addr=10.0.0.2, port=4420 00:22:45.322 [2024-07-23 04:18:38.419253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1710cb0 is same with the state(5) to be set 00:22:45.322 [2024-07-23 04:18:38.419371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1710cb0 (9): Bad file descriptor 00:22:45.322 [2024-07-23 04:18:38.419515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:45.322 [2024-07-23 04:18:38.419531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:45.322 [2024-07-23 04:18:38.419672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:45.322 [2024-07-23 04:18:38.419800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:45.322 [2024-07-23 04:18:38.419925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:45.322 04:18:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:46.256 [2024-07-23 04:18:39.420007] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.256 [2024-07-23 04:18:39.420269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1710cb0 with addr=10.0.0.2, port=4420 00:22:46.256 [2024-07-23 04:18:39.420607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1710cb0 is same with the state(5) to be set 00:22:46.256 [2024-07-23 04:18:39.420980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1710cb0 (9): Bad file descriptor 00:22:46.256 [2024-07-23 04:18:39.421339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:46.256 [2024-07-23 04:18:39.421666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:46.256 [2024-07-23 04:18:39.421684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:46.256 [2024-07-23 04:18:39.421707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:46.256 [2024-07-23 04:18:39.421719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:46.256 04:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:46.514 [2024-07-23 04:18:39.620386] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:46.514 04:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 98296
00:22:47.447 [2024-07-23 04:18:40.436713] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:54.023
00:22:54.024 Latency(us)
00:22:54.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.024 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:54.024 Verification LBA range: start 0x0 length 0x4000
00:22:54.024 NVMe0n1 : 10.01 5835.54 22.80 0.00 0.00 21892.98 1966.08 3019898.88
00:22:54.024 ===================================================================================================================
00:22:54.024 Total : 5835.54 22.80 0.00 0.00 21892.98 1966.08 3019898.88
00:22:54.024 0
00:22:54.024 04:18:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=98404
00:22:54.024 04:18:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:54.024 04:18:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:54.282 Running I/O for 10 seconds...
00:22:55.222 04:18:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:55.222 [2024-07-23 04:18:48.535001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set
00:22:55.222 [2024-07-23 04:18:48.535062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set
00:22:55.222 [2024-07-23 04:18:48.535093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set
00:22:55.222 [2024-07-23 04:18:48.535102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set
00:22:55.222 [2024-07-23 04:18:48.535111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set
00:22:55.222 [2024-07-23 04:18:48.535119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set
00:22:55.222 [2024-07-23 04:18:48.535127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set
00:22:55.222 [2024-07-23 04:18:48.535136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set
00:22:55.222 [2024-07-23 04:18:48.535144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set
00:22:55.222 [2024-07-23 04:18:48.535152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222
[2024-07-23 04:18:48.535160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the 
state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535444] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.222 [2024-07-23 04:18:48.535496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.223 [2024-07-23 04:18:48.535503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.223 [2024-07-23 04:18:48.535511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5870 is same with the state(5) to be set 00:22:55.223 [2024-07-23 04:18:48.535518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xec5870 is same with the state(5) to be set
00:22:55.223 [... the identical tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0xec5870 repeats at every timestamp from 04:18:48.535526 through 04:18:48.536034 ...]
00:22:55.223 [... nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* pairs then list every outstanding READ/WRITE on sqid:1, each completed as ABORTED - SQ DELETION (00/08), from 04:18:48.536122 onward, ending with: ...]
00:22:55.227 [2024-07-23 04:18:48.541081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:55.227 [2024-07-23 04:18:48.541090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:22:55.227 [2024-07-23 04:18:48.541100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.227 [2024-07-23 04:18:48.541109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.227 [2024-07-23 04:18:48.541124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1746770 is same with the state(5) to be set 00:22:55.227 [2024-07-23 04:18:48.541135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:55.227 [2024-07-23 04:18:48.541144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:55.227 [2024-07-23 04:18:48.541152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73040 len:8 PRP1 0x0 PRP2 0x0 00:22:55.227 [2024-07-23 04:18:48.541161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.227 [2024-07-23 04:18:48.541220] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1746770 was disconnected and freed. reset controller. 00:22:55.227 [2024-07-23 04:18:48.541311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.227 [2024-07-23 04:18:48.541327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.227 [2024-07-23 04:18:48.541337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.227 [2024-07-23 04:18:48.541345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.227 [2024-07-23 04:18:48.541355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.227 [2024-07-23 04:18:48.541363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.227 [2024-07-23 04:18:48.541372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.227 [2024-07-23 04:18:48.541381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.227 [2024-07-23 04:18:48.541389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1710cb0 is same with the state(5) to be set 00:22:55.227 [2024-07-23 04:18:48.541976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:55.227 [2024-07-23 04:18:48.542026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1710cb0 (9): Bad file descriptor 00:22:55.227 [2024-07-23 04:18:48.542128] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.227 [2024-07-23 04:18:48.542149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1710cb0 with addr=10.0.0.2, port=4420 00:22:55.227 [2024-07-23 04:18:48.542160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1710cb0 is same with the state(5) to be set 00:22:55.227 [2024-07-23 04:18:48.542178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1710cb0 (9): Bad file descriptor 00:22:55.227 [2024-07-23 04:18:48.542193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:55.227 [2024-07-23 04:18:48.542218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:55.227 [2024-07-23 04:18:48.542333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:55.227 [2024-07-23 04:18:48.542583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.227 [2024-07-23 04:18:48.542608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:55.227 04:18:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:56.604 [2024-07-23 04:18:49.542704] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.604 [2024-07-23 04:18:49.542764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1710cb0 with addr=10.0.0.2, port=4420 00:22:56.604 [2024-07-23 04:18:49.542778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1710cb0 is same with the state(5) to be set 00:22:56.604 [2024-07-23 04:18:49.542797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1710cb0 (9): Bad file descriptor 00:22:56.604 [2024-07-23 04:18:49.542813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:56.604 [2024-07-23 04:18:49.542821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:56.604 [2024-07-23 04:18:49.542830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:56.604 [2024-07-23 04:18:49.542848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:56.604 [2024-07-23 04:18:49.542858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:57.540 [2024-07-23 04:18:50.542946] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:57.540 [2024-07-23 04:18:50.543016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1710cb0 with addr=10.0.0.2, port=4420 00:22:57.540 [2024-07-23 04:18:50.543041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1710cb0 is same with the state(5) to be set 00:22:57.540 [2024-07-23 04:18:50.543060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1710cb0 (9): Bad file descriptor 00:22:57.540 [2024-07-23 04:18:50.543078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:57.540 [2024-07-23 04:18:50.543087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:57.540 [2024-07-23 04:18:50.543095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:57.540 [2024-07-23 04:18:50.543113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:57.540 [2024-07-23 04:18:50.543123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.490 [2024-07-23 04:18:51.545994] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.490 [2024-07-23 04:18:51.546051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1710cb0 with addr=10.0.0.2, port=4420 00:22:58.490 [2024-07-23 04:18:51.546065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1710cb0 is same with the state(5) to be set 00:22:58.490 [2024-07-23 04:18:51.546274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1710cb0 (9): Bad file descriptor 00:22:58.490 [2024-07-23 04:18:51.546493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:58.490 [2024-07-23 04:18:51.546506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:58.490 [2024-07-23 04:18:51.546514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.490 [2024-07-23 04:18:51.550127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.490 [2024-07-23 04:18:51.550160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.490 04:18:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.490 [2024-07-23 04:18:51.792995] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.490 04:18:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 98404 00:22:59.426 [2024-07-23 04:18:52.589833] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
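A minimal sketch (bash), assuming a running SPDK nvmf target configured as in this run (subsystem nqn.2016-06.io.spdk:cnode1, repo checkout at /home/vagrant/spdk_repo/spdk, target address 10.0.0.2:4420); it mirrors the host/timeout.sh@102 RPC traced above, after which the host's next reconnect attempt succeeds and the log reports "Resetting controller successful.":

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Re-create the TCP listener the timeout test removed earlier; the initiator is
# still retrying on its reconnect-delay schedule, so the reset can now complete.
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420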
00:23:04.693 00:23:04.693 Latency(us) 00:23:04.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.693 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:04.693 Verification LBA range: start 0x0 length 0x4000 00:23:04.693 NVMe0n1 : 10.01 6088.65 23.78 4254.91 0.00 12342.35 558.55 3019898.88 00:23:04.693 =================================================================================================================== 00:23:04.693 Total : 6088.65 23.78 4254.91 0.00 12342.35 0.00 3019898.88 00:23:04.693 0 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 98283 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 98283 ']' 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 98283 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98283 00:23:04.693 killing process with pid 98283 00:23:04.693 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.693 00:23:04.693 Latency(us) 00:23:04.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.693 =================================================================================================================== 00:23:04.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98283' 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 98283 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 98283 00:23:04.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=98513 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 98513 /var/tmp/bdevperf.sock 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 98513 ']' 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.693 04:18:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:04.693 [2024-07-23 04:18:57.714179] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:23:04.693 [2024-07-23 04:18:57.714494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98513 ] 00:23:04.693 [2024-07-23 04:18:57.837231] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:04.693 [2024-07-23 04:18:57.855883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.693 [2024-07-23 04:18:57.922401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.693 [2024-07-23 04:18:57.975811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:05.625 04:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.625 04:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:23:05.625 04:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98513 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:05.625 04:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98529 00:23:05.626 04:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:05.626 04:18:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:05.883 NVMe0n1 00:23:05.883 04:18:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98575 00:23:05.883 04:18:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:05.883 04:18:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:06.141 Running I/O for 10 seconds... 
00:23:07.117 04:19:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.117 [2024-07-23 04:19:00.453370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 
[2024-07-23 04:19:00.453802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.453984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.453994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454003] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454205] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.454379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.454791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:07.117 [2024-07-23 04:19:00.455299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.117 [2024-07-23 04:19:00.455378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.117 [2024-07-23 04:19:00.455389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455519] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.455982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.455991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60136 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:07.118 [2024-07-23 04:19:00.456344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.456841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.456851] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.457343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.457448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.457462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.457471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.457482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.457498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.457510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.457518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.457529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.457538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.457549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.118 [2024-07-23 04:19:00.457572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.457583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aa080 is same with the state(5) to be set 00:23:07.118 [2024-07-23 04:19:00.457594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.118 [2024-07-23 04:19:00.457602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.118 [2024-07-23 04:19:00.457610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52088 len:8 PRP1 0x0 PRP2 0x0 00:23:07.118 [2024-07-23 04:19:00.457619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.118 [2024-07-23 04:19:00.457671] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9aa080 was disconnected and freed. reset controller. 
00:23:07.118 [2024-07-23 04:19:00.458169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:07.118 [2024-07-23 04:19:00.458756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9adb20 (9): Bad file descriptor 00:23:07.118 [2024-07-23 04:19:00.459267] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.376 [2024-07-23 04:19:00.459456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9adb20 with addr=10.0.0.2, port=4420 00:23:07.377 [2024-07-23 04:19:00.459486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9adb20 is same with the state(5) to be set 00:23:07.377 [2024-07-23 04:19:00.459526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9adb20 (9): Bad file descriptor 00:23:07.377 [2024-07-23 04:19:00.459552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:07.377 [2024-07-23 04:19:00.459566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:07.377 [2024-07-23 04:19:00.459579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:07.377 [2024-07-23 04:19:00.459609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.377 [2024-07-23 04:19:00.459642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:07.377 04:19:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 98575 00:23:09.302 [2024-07-23 04:19:02.459763] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.302 [2024-07-23 04:19:02.460194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9adb20 with addr=10.0.0.2, port=4420 00:23:09.302 [2024-07-23 04:19:02.460619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9adb20 is same with the state(5) to be set 00:23:09.302 [2024-07-23 04:19:02.461015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9adb20 (9): Bad file descriptor 00:23:09.302 [2024-07-23 04:19:02.461404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.302 [2024-07-23 04:19:02.461776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.302 [2024-07-23 04:19:02.462174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.302 [2024-07-23 04:19:02.462405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.302 [2024-07-23 04:19:02.462609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:11.201 [2024-07-23 04:19:04.463121] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.201 [2024-07-23 04:19:04.463515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9adb20 with addr=10.0.0.2, port=4420 00:23:11.201 [2024-07-23 04:19:04.463968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9adb20 is same with the state(5) to be set 00:23:11.201 [2024-07-23 04:19:04.464388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9adb20 (9): Bad file descriptor 00:23:11.201 [2024-07-23 04:19:04.464795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.201 [2024-07-23 04:19:04.465188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:11.201 [2024-07-23 04:19:04.465586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:11.201 [2024-07-23 04:19:04.465832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.201 [2024-07-23 04:19:04.466048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:13.732 [2024-07-23 04:19:06.466309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:13.732 [2024-07-23 04:19:06.466666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:13.732 [2024-07-23 04:19:06.467124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:13.732 [2024-07-23 04:19:06.467527] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:13.732 [2024-07-23 04:19:06.467761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.298 00:23:14.298 Latency(us) 00:23:14.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.298 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:14.298 NVMe0n1 : 8.16 2537.47 9.91 15.69 0.00 50087.41 6821.70 7015926.69 00:23:14.298 =================================================================================================================== 00:23:14.298 Total : 2537.47 9.91 15.69 0.00 50087.41 6821.70 7015926.69 00:23:14.298 0 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:14.298 Attaching 5 probes... 
00:23:14.298 1301.320260: reset bdev controller NVMe0 00:23:14.298 1302.322462: reconnect bdev controller NVMe0 00:23:14.298 3302.835574: reconnect delay bdev controller NVMe0 00:23:14.298 3302.851449: reconnect bdev controller NVMe0 00:23:14.298 5306.174771: reconnect delay bdev controller NVMe0 00:23:14.298 5306.210943: reconnect bdev controller NVMe0 00:23:14.298 7309.440665: reconnect delay bdev controller NVMe0 00:23:14.298 7309.455699: reconnect bdev controller NVMe0 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 98529 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 98513 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 98513 ']' 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 98513 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98513 00:23:14.298 killing process with pid 98513 00:23:14.298 Received shutdown signal, test time was about 8.219327 seconds 00:23:14.298 00:23:14.298 Latency(us) 00:23:14.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.298 =================================================================================================================== 00:23:14.298 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98513' 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 98513 00:23:14.298 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 98513 00:23:14.556 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:14.815 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:14.815 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:14.815 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.815 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:23:14.816 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.816 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:23:14.816 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.816 04:19:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.816 rmmod 
nvme_tcp 00:23:14.816 rmmod nvme_fabrics 00:23:14.816 rmmod nvme_keyring 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 98094 ']' 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 98094 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 98094 ']' 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 98094 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98094 00:23:14.816 killing process with pid 98094 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98094' 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 98094 00:23:14.816 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 98094 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:15.075 00:23:15.075 real 0m45.850s 00:23:15.075 user 2m13.455s 00:23:15.075 sys 0m6.058s 00:23:15.075 ************************************ 00:23:15.075 END TEST nvmf_timeout 00:23:15.075 ************************************ 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:15.075 ************************************ 00:23:15.075 END TEST nvmf_host 00:23:15.075 ************************************ 
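A minimal sketch of the cleanup that the nvmftestfini call above performs, assuming the interface and namespace names used throughout this run (nvmf_init_if, nvmf_tgt_ns_spdk) and a target PID recorded at startup; the harness's real implementation lives in nvmf/common.sh and does more error handling than shown here:

# unload the initiator-side NVMe/TCP modules (this also pulls in nvme_fabrics/nvme_keyring)
modprobe -r nvme-tcp || true
modprobe -r nvme-fabrics || true

# stop the nvmf_tgt application if it is still alive
if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
fi

# drop the target namespace and flush the initiator-side address
ip netns del nvmf_tgt_ns_spdk 2>/dev/null || true
ip -4 addr flush nvmf_init_if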
00:23:15.075 00:23:15.075 real 5m37.199s 00:23:15.075 user 15m49.735s 00:23:15.075 sys 1m18.597s 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:15.075 04:19:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.075 04:19:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:15.075 ************************************ 00:23:15.075 END TEST nvmf_tcp 00:23:15.075 ************************************ 00:23:15.075 00:23:15.075 real 14m13.632s 00:23:15.075 user 37m47.486s 00:23:15.075 sys 4m4.823s 00:23:15.075 04:19:08 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:15.075 04:19:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.075 04:19:08 -- common/autotest_common.sh@1142 -- # return 0 00:23:15.075 04:19:08 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:23:15.075 04:19:08 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:15.075 04:19:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:15.075 04:19:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:15.075 04:19:08 -- common/autotest_common.sh@10 -- # set +x 00:23:15.334 ************************************ 00:23:15.334 START TEST nvmf_dif 00:23:15.334 ************************************ 00:23:15.334 04:19:08 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:15.334 * Looking for test storage... 00:23:15.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:15.334 04:19:08 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:15.334 04:19:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.334 04:19:08 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.334 04:19:08 nvmf_dif -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.334 04:19:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.334 04:19:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.334 04:19:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.334 04:19:08 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:15.334 04:19:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:15.334 04:19:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:15.334 04:19:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:15.334 04:19:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:15.334 04:19:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:15.334 04:19:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.334 04:19:08 nvmf_dif -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 13> /dev/null' 00:23:15.334 04:19:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:15.334 Cannot find device "nvmf_tgt_br" 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@155 -- # true 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:15.334 Cannot find device "nvmf_tgt_br2" 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@156 -- # true 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:15.334 Cannot find device "nvmf_tgt_br" 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@158 -- # true 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:15.334 Cannot find device "nvmf_tgt_br2" 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@159 -- # true 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:15.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:15.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:15.334 04:19:08 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@169 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:15.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:23:15.593 00:23:15.593 --- 10.0.0.2 ping statistics --- 00:23:15.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.593 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:15.593 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:15.593 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:23:15.593 00:23:15.593 --- 10.0.0.3 ping statistics --- 00:23:15.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.593 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:15.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:15.593 00:23:15.593 --- 10.0.0.1 ping statistics --- 00:23:15.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.593 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:15.593 04:19:08 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:15.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:15.851 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:15.851 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:16.110 04:19:09 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.110 04:19:09 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.110 04:19:09 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.110 04:19:09 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.110 04:19:09 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.110 04:19:09 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.110 04:19:09 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:16.110 04:19:09 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:16.110 04:19:09 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.110 04:19:09 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.110 04:19:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:16.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.110 04:19:09 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=99005 00:23:16.110 04:19:09 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:16.110 04:19:09 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 99005 00:23:16.110 04:19:09 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 99005 ']' 00:23:16.110 04:19:09 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.110 04:19:09 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.110 04:19:09 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.110 04:19:09 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.110 04:19:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:16.110 [2024-07-23 04:19:09.309648] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:23:16.110 [2024-07-23 04:19:09.309928] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.110 [2024-07-23 04:19:09.432717] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
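Condensed from the nvmf_veth_init trace above, this is the topology the dif tests talk to: a veth pair into a network namespace for the target, bridged to the initiator-side pair, with TCP port 4420 allowed in. A sketch only (the second target interface, nvmf_tgt_if2/10.0.0.3, is built the same way and omitted here), not a drop-in replacement for nvmf/common.sh:

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2        # initiator -> target reachability check before starting nvmf_tgt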
00:23:16.110 [2024-07-23 04:19:09.453334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.368 [2024-07-23 04:19:09.518981] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.368 [2024-07-23 04:19:09.519323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.368 [2024-07-23 04:19:09.519521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.368 [2024-07-23 04:19:09.519648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.368 [2024-07-23 04:19:09.519663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.368 [2024-07-23 04:19:09.519698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.368 [2024-07-23 04:19:09.575285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:17.304 04:19:10 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.304 04:19:10 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:23:17.304 04:19:10 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.304 04:19:10 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.304 04:19:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:17.304 04:19:10 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.304 04:19:10 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:17.304 04:19:10 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:17.304 04:19:10 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.304 04:19:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:17.304 [2024-07-23 04:19:10.345689] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.304 04:19:10 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.304 04:19:10 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:17.304 04:19:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:17.304 04:19:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.304 04:19:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:17.304 ************************************ 00:23:17.304 START TEST fio_dif_1_default 00:23:17.304 ************************************ 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:17.304 bdev_null0 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:17.304 [2024-07-23 04:19:10.393784] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.304 { 00:23:17.304 "params": { 00:23:17.304 "name": "Nvme$subsystem", 00:23:17.304 "trtype": "$TEST_TRANSPORT", 00:23:17.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.304 "adrfam": "ipv4", 00:23:17.304 "trsvcid": "$NVMF_PORT", 00:23:17.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.304 "hdgst": ${hdgst:-false}, 00:23:17.304 "ddgst": ${ddgst:-false} 00:23:17.304 }, 00:23:17.304 "method": "bdev_nvme_attach_controller" 00:23:17.304 } 00:23:17.304 EOF 00:23:17.304 )") 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:17.304 04:19:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:17.304 "params": { 00:23:17.304 "name": "Nvme0", 00:23:17.304 "trtype": "tcp", 00:23:17.304 "traddr": "10.0.0.2", 00:23:17.305 "adrfam": "ipv4", 00:23:17.305 "trsvcid": "4420", 00:23:17.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:17.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:17.305 "hdgst": false, 00:23:17.305 "ddgst": false 00:23:17.305 }, 00:23:17.305 "method": "bdev_nvme_attach_controller" 00:23:17.305 }' 00:23:17.305 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:17.305 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:17.305 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:17.305 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:17.305 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:17.305 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:17.305 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:17.305 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:17.305 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:17.305 04:19:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:17.305 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:17.305 fio-3.35 00:23:17.305 Starting 1 thread 00:23:29.542 00:23:29.542 filename0: (groupid=0, jobs=1): err= 0: pid=99072: Tue Jul 23 04:19:21 2024 00:23:29.542 read: IOPS=10.3k, BW=40.1MiB/s (42.1MB/s)(401MiB/10001msec) 00:23:29.542 slat (usec): min=5, max=107, avg= 7.43, 
stdev= 3.12 00:23:29.542 clat (usec): min=313, max=3794, avg=367.35, stdev=41.61 00:23:29.542 lat (usec): min=319, max=3823, avg=374.78, stdev=42.32 00:23:29.542 clat percentiles (usec): 00:23:29.542 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 338], 00:23:29.542 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:23:29.542 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 408], 95.00th=[ 429], 00:23:29.542 | 99.00th=[ 482], 99.50th=[ 502], 99.90th=[ 545], 99.95th=[ 562], 00:23:29.542 | 99.99th=[ 1205] 00:23:29.542 bw ( KiB/s): min=38368, max=42368, per=100.00%, avg=41117.37, stdev=905.18, samples=19 00:23:29.542 iops : min= 9592, max=10592, avg=10279.32, stdev=226.31, samples=19 00:23:29.542 lat (usec) : 500=99.46%, 750=0.52%, 1000=0.01% 00:23:29.542 lat (msec) : 2=0.01%, 4=0.01% 00:23:29.542 cpu : usr=83.56%, sys=14.46%, ctx=46, majf=0, minf=0 00:23:29.542 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:29.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.542 issued rwts: total=102728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.542 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:29.542 00:23:29.542 Run status group 0 (all jobs): 00:23:29.542 READ: bw=40.1MiB/s (42.1MB/s), 40.1MiB/s-40.1MiB/s (42.1MB/s-42.1MB/s), io=401MiB (421MB), run=10001-10001msec 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 ************************************ 00:23:29.542 END TEST fio_dif_1_default 00:23:29.542 ************************************ 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.542 00:23:29.542 real 0m10.937s 00:23:29.542 user 0m8.947s 00:23:29.542 sys 0m1.699s 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 04:19:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:29.542 04:19:21 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:29.542 04:19:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:29.542 04:19:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 
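The fio_dif_1_default run that just finished is wired together from two pieces visible in the trace: the bdev_nvme_attach_controller entry printed by gen_nvmf_target_json, and an fio invocation that preloads the SPDK bdev plugin. A rough standalone equivalent is sketched below; the surrounding "subsystems" wrapper, the /tmp/nvme0.json path, and the job parameters (filename, runtime) are assumptions filled in for illustration, not copied from the harness:

# attach-controller entry as printed in the trace, wrapped in a bdev subsystem section (wrapper assumed)
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# run the job through the SPDK bdev plugin, matching the randread/4k/iodepth=4 job shown above
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/nvme0.json --filename=Nvme0n1 \
    --rw=randread --bs=4k --iodepth=4 --thread --time_based --runtime=10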
00:23:29.542 04:19:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 ************************************ 00:23:29.542 START TEST fio_dif_1_multi_subsystems 00:23:29.542 ************************************ 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 bdev_null0 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 [2024-07-23 04:19:21.386362] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:29.542 04:19:21 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 bdev_null1 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.542 { 00:23:29.542 "params": { 00:23:29.542 "name": "Nvme$subsystem", 00:23:29.542 "trtype": "$TEST_TRANSPORT", 00:23:29.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.542 "adrfam": "ipv4", 00:23:29.542 "trsvcid": "$NVMF_PORT", 00:23:29.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.542 "hdgst": ${hdgst:-false}, 00:23:29.542 "ddgst": ${ddgst:-false} 00:23:29.542 }, 00:23:29.542 "method": "bdev_nvme_attach_controller" 00:23:29.542 } 00:23:29.542 EOF 00:23:29.542 )") 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:29.542 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.543 { 00:23:29.543 "params": { 00:23:29.543 "name": "Nvme$subsystem", 00:23:29.543 "trtype": "$TEST_TRANSPORT", 00:23:29.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.543 "adrfam": "ipv4", 00:23:29.543 "trsvcid": "$NVMF_PORT", 00:23:29.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.543 "hdgst": ${hdgst:-false}, 00:23:29.543 "ddgst": ${ddgst:-false} 00:23:29.543 }, 00:23:29.543 "method": "bdev_nvme_attach_controller" 00:23:29.543 } 00:23:29.543 EOF 00:23:29.543 )") 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
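Each subsystem in the multi-subsystem test is created with the same four target-side RPCs that appear as rpc_cmd calls in the trace above; spelled out here for cnode1 as a sketch (the rpc_cmd wrapper also handles the RPC socket and retries, which is skipped), with the 64/512/16 arguments coming from the NULL_SIZE/NULL_BLOCK_SIZE/NULL_META defaults set at the top of dif.sh:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# small null bdev: size 64 (MB), 512-byte blocks, 16 bytes of metadata, DIF type 1
$RPC bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1

# expose it as a namespace of cnode1, listening on the target's NVMe/TCP address
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420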
00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:29.543 "params": { 00:23:29.543 "name": "Nvme0", 00:23:29.543 "trtype": "tcp", 00:23:29.543 "traddr": "10.0.0.2", 00:23:29.543 "adrfam": "ipv4", 00:23:29.543 "trsvcid": "4420", 00:23:29.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:29.543 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:29.543 "hdgst": false, 00:23:29.543 "ddgst": false 00:23:29.543 }, 00:23:29.543 "method": "bdev_nvme_attach_controller" 00:23:29.543 },{ 00:23:29.543 "params": { 00:23:29.543 "name": "Nvme1", 00:23:29.543 "trtype": "tcp", 00:23:29.543 "traddr": "10.0.0.2", 00:23:29.543 "adrfam": "ipv4", 00:23:29.543 "trsvcid": "4420", 00:23:29.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.543 "hdgst": false, 00:23:29.543 "ddgst": false 00:23:29.543 }, 00:23:29.543 "method": "bdev_nvme_attach_controller" 00:23:29.543 }' 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:29.543 04:19:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:29.543 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:29.543 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:29.543 fio-3.35 00:23:29.543 Starting 2 threads 00:23:39.533 00:23:39.533 filename0: (groupid=0, jobs=1): err= 0: pid=99231: Tue Jul 23 04:19:32 2024 00:23:39.533 read: IOPS=5416, BW=21.2MiB/s (22.2MB/s)(212MiB/10001msec) 00:23:39.533 slat (nsec): min=6227, max=88111, avg=12525.38, stdev=4298.87 00:23:39.533 clat (usec): min=558, max=4735, avg=705.46, stdev=57.77 00:23:39.533 lat (usec): min=564, max=4756, avg=717.98, stdev=58.21 00:23:39.533 clat percentiles (usec): 00:23:39.533 | 1.00th=[ 611], 5.00th=[ 644], 10.00th=[ 652], 20.00th=[ 668], 00:23:39.533 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 709], 00:23:39.533 | 70.00th=[ 725], 80.00th=[ 742], 90.00th=[ 758], 95.00th=[ 783], 00:23:39.533 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 938], 99.95th=[ 955], 00:23:39.533 | 99.99th=[ 1004] 00:23:39.533 bw ( KiB/s): min=21120, max=22080, per=50.08%, avg=21699.37, stdev=273.18, samples=19 00:23:39.533 iops : min= 5280, max= 
5520, avg=5424.84, stdev=68.29, samples=19 00:23:39.533 lat (usec) : 750=85.79%, 1000=14.20% 00:23:39.533 lat (msec) : 2=0.01%, 10=0.01% 00:23:39.533 cpu : usr=89.25%, sys=9.38%, ctx=20, majf=0, minf=6 00:23:39.533 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:39.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.533 issued rwts: total=54168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.533 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:39.533 filename1: (groupid=0, jobs=1): err= 0: pid=99232: Tue Jul 23 04:19:32 2024 00:23:39.533 read: IOPS=5416, BW=21.2MiB/s (22.2MB/s)(212MiB/10001msec) 00:23:39.533 slat (nsec): min=6212, max=65114, avg=12878.60, stdev=4327.13 00:23:39.533 clat (usec): min=598, max=4211, avg=703.14, stdev=53.00 00:23:39.533 lat (usec): min=605, max=4236, avg=716.02, stdev=53.48 00:23:39.533 clat percentiles (usec): 00:23:39.533 | 1.00th=[ 627], 5.00th=[ 644], 10.00th=[ 652], 20.00th=[ 668], 00:23:39.533 | 30.00th=[ 676], 40.00th=[ 685], 50.00th=[ 701], 60.00th=[ 709], 00:23:39.533 | 70.00th=[ 717], 80.00th=[ 734], 90.00th=[ 758], 95.00th=[ 783], 00:23:39.533 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 922], 99.95th=[ 947], 00:23:39.533 | 99.99th=[ 1004] 00:23:39.533 bw ( KiB/s): min=21162, max=22048, per=50.08%, avg=21701.58, stdev=266.23, samples=19 00:23:39.533 iops : min= 5290, max= 5512, avg=5425.37, stdev=66.61, samples=19 00:23:39.533 lat (usec) : 750=87.68%, 1000=12.31% 00:23:39.533 lat (msec) : 2=0.01%, 10=0.01% 00:23:39.533 cpu : usr=89.80%, sys=8.84%, ctx=15, majf=0, minf=0 00:23:39.533 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:39.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.533 issued rwts: total=54168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.534 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:39.534 00:23:39.534 Run status group 0 (all jobs): 00:23:39.534 READ: bw=42.3MiB/s (44.4MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=423MiB (444MB), run=10001-10001msec 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:39.534 ************************************ 00:23:39.534 END TEST fio_dif_1_multi_subsystems 00:23:39.534 ************************************ 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.534 00:23:39.534 real 0m11.048s 00:23:39.534 user 0m18.613s 00:23:39.534 sys 0m2.095s 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:39.534 04:19:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:39.534 04:19:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:39.534 04:19:32 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:39.534 04:19:32 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:39.534 04:19:32 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.534 04:19:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:39.534 ************************************ 00:23:39.534 START TEST fio_dif_rand_params 00:23:39.534 ************************************ 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:39.534 04:19:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:39.534 bdev_null0 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:39.534 [2024-07-23 04:19:32.486110] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.534 { 00:23:39.534 "params": { 00:23:39.534 "name": "Nvme$subsystem", 00:23:39.534 "trtype": "$TEST_TRANSPORT", 00:23:39.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.534 "adrfam": "ipv4", 00:23:39.534 "trsvcid": "$NVMF_PORT", 00:23:39.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.534 "hdgst": ${hdgst:-false}, 00:23:39.534 "ddgst": ${ddgst:-false} 00:23:39.534 }, 00:23:39.534 "method": "bdev_nvme_attach_controller" 00:23:39.534 } 00:23:39.534 EOF 00:23:39.534 )") 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:39.534 "params": { 00:23:39.534 "name": "Nvme0", 00:23:39.534 "trtype": "tcp", 00:23:39.534 "traddr": "10.0.0.2", 00:23:39.534 "adrfam": "ipv4", 00:23:39.534 "trsvcid": "4420", 00:23:39.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:39.534 "hdgst": false, 00:23:39.534 "ddgst": false 00:23:39.534 }, 00:23:39.534 "method": "bdev_nvme_attach_controller" 00:23:39.534 }' 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:39.534 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:39.535 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:39.535 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:39.535 04:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:39.535 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:39.535 ... 00:23:39.535 fio-3.35 00:23:39.535 Starting 3 threads 00:23:46.105 00:23:46.105 filename0: (groupid=0, jobs=1): err= 0: pid=99388: Tue Jul 23 04:19:38 2024 00:23:46.105 read: IOPS=284, BW=35.5MiB/s (37.2MB/s)(178MiB/5006msec) 00:23:46.105 slat (usec): min=6, max=107, avg=17.47, stdev= 7.92 00:23:46.105 clat (usec): min=10004, max=12608, avg=10521.26, stdev=334.04 00:23:46.105 lat (usec): min=10027, max=12634, avg=10538.72, stdev=334.54 00:23:46.105 clat percentiles (usec): 00:23:46.105 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10290], 20.00th=[10290], 00:23:46.105 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10421], 00:23:46.105 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:23:46.105 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12649], 99.95th=[12649], 00:23:46.105 | 99.99th=[12649] 00:23:46.105 bw ( KiB/s): min=36096, max=36864, per=33.43%, avg=36445.33, stdev=397.83, samples=9 00:23:46.105 iops : min= 282, max= 288, avg=284.67, stdev= 3.16, samples=9 00:23:46.105 lat (msec) : 20=100.00% 00:23:46.105 cpu : usr=92.23%, sys=7.05%, ctx=132, majf=0, minf=9 00:23:46.105 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.105 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.105 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:46.105 filename0: (groupid=0, jobs=1): err= 0: pid=99389: Tue Jul 23 04:19:38 2024 00:23:46.105 read: IOPS=284, BW=35.5MiB/s (37.2MB/s)(178MiB/5007msec) 00:23:46.105 slat (nsec): min=6489, max=75847, avg=18322.95, stdev=9356.13 00:23:46.105 clat (usec): min=9952, max=13777, avg=10523.02, stdev=353.62 00:23:46.105 lat (usec): min=9965, max=13831, avg=10541.34, stdev=354.59 00:23:46.105 clat percentiles (usec): 00:23:46.105 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10290], 20.00th=[10290], 00:23:46.105 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10421], 00:23:46.105 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:23:46.105 | 99.00th=[11731], 99.50th=[11863], 99.90th=[13829], 99.95th=[13829], 00:23:46.105 | 99.99th=[13829] 00:23:46.105 bw ( KiB/s): min=36096, max=36864, per=33.42%, avg=36437.33, stdev=404.77, samples=9 00:23:46.105 iops : min= 282, max= 288, avg=284.67, stdev= 3.16, samples=9 00:23:46.105 lat (msec) : 10=0.21%, 20=99.79% 00:23:46.105 cpu : usr=92.25%, sys=7.17%, ctx=10, majf=0, minf=0 00:23:46.105 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.105 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.105 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:46.105 filename0: (groupid=0, jobs=1): err= 0: pid=99390: Tue Jul 23 04:19:38 2024 00:23:46.105 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(178MiB/5009msec) 00:23:46.105 slat (nsec): min=6236, 
max=75595, avg=16913.34, stdev=9979.91 00:23:46.105 clat (usec): min=9954, max=15362, avg=10530.01, stdev=389.98 00:23:46.105 lat (usec): min=9967, max=15386, avg=10546.92, stdev=390.66 00:23:46.105 clat percentiles (usec): 00:23:46.105 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10290], 20.00th=[10290], 00:23:46.105 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10421], 00:23:46.105 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:23:46.105 | 99.00th=[11731], 99.50th=[11731], 99.90th=[15401], 99.95th=[15401], 00:23:46.105 | 99.99th=[15401] 00:23:46.105 bw ( KiB/s): min=35328, max=36864, per=33.32%, avg=36326.40, stdev=518.36, samples=10 00:23:46.105 iops : min= 276, max= 288, avg=283.80, stdev= 4.05, samples=10 00:23:46.105 lat (msec) : 10=0.21%, 20=99.79% 00:23:46.105 cpu : usr=92.55%, sys=6.89%, ctx=8, majf=0, minf=0 00:23:46.105 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.105 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.105 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:46.105 00:23:46.105 Run status group 0 (all jobs): 00:23:46.105 READ: bw=106MiB/s (112MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=533MiB (559MB), run=5006-5009msec 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.105 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in 
"$@" 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 bdev_null0 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 [2024-07-23 04:19:38.407404] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 bdev_null1 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
bdev_null1 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 bdev_null2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.106 04:19:38 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.106 { 00:23:46.106 "params": { 00:23:46.106 "name": "Nvme$subsystem", 00:23:46.106 "trtype": "$TEST_TRANSPORT", 00:23:46.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.106 "adrfam": "ipv4", 00:23:46.106 "trsvcid": "$NVMF_PORT", 00:23:46.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.106 "hdgst": ${hdgst:-false}, 00:23:46.106 "ddgst": ${ddgst:-false} 00:23:46.106 }, 00:23:46.106 "method": "bdev_nvme_attach_controller" 00:23:46.106 } 00:23:46.106 EOF 00:23:46.106 )") 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.106 { 00:23:46.106 "params": { 00:23:46.106 "name": "Nvme$subsystem", 00:23:46.106 "trtype": "$TEST_TRANSPORT", 00:23:46.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.106 "adrfam": "ipv4", 00:23:46.106 "trsvcid": "$NVMF_PORT", 00:23:46.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.106 "hdgst": ${hdgst:-false}, 00:23:46.106 "ddgst": ${ddgst:-false} 00:23:46.106 }, 00:23:46.106 "method": "bdev_nvme_attach_controller" 00:23:46.106 } 00:23:46.106 EOF 00:23:46.106 )") 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:46.106 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.107 { 00:23:46.107 "params": { 00:23:46.107 "name": "Nvme$subsystem", 00:23:46.107 "trtype": "$TEST_TRANSPORT", 00:23:46.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.107 "adrfam": "ipv4", 00:23:46.107 "trsvcid": "$NVMF_PORT", 00:23:46.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.107 "hdgst": ${hdgst:-false}, 00:23:46.107 "ddgst": ${ddgst:-false} 00:23:46.107 }, 00:23:46.107 "method": "bdev_nvme_attach_controller" 00:23:46.107 } 00:23:46.107 EOF 00:23:46.107 )") 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:46.107 "params": { 00:23:46.107 "name": "Nvme0", 00:23:46.107 "trtype": "tcp", 00:23:46.107 "traddr": "10.0.0.2", 00:23:46.107 "adrfam": "ipv4", 00:23:46.107 "trsvcid": "4420", 00:23:46.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:46.107 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:46.107 "hdgst": false, 00:23:46.107 "ddgst": false 00:23:46.107 }, 00:23:46.107 "method": "bdev_nvme_attach_controller" 00:23:46.107 },{ 00:23:46.107 "params": { 00:23:46.107 "name": "Nvme1", 00:23:46.107 "trtype": "tcp", 00:23:46.107 "traddr": "10.0.0.2", 00:23:46.107 "adrfam": "ipv4", 00:23:46.107 "trsvcid": "4420", 00:23:46.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.107 "hdgst": false, 00:23:46.107 "ddgst": false 00:23:46.107 }, 00:23:46.107 "method": "bdev_nvme_attach_controller" 00:23:46.107 },{ 00:23:46.107 "params": { 00:23:46.107 "name": "Nvme2", 00:23:46.107 "trtype": "tcp", 00:23:46.107 "traddr": "10.0.0.2", 00:23:46.107 "adrfam": "ipv4", 00:23:46.107 "trsvcid": "4420", 00:23:46.107 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:46.107 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:46.107 "hdgst": false, 00:23:46.107 "ddgst": false 00:23:46.107 }, 00:23:46.107 "method": "bdev_nvme_attach_controller" 00:23:46.107 }' 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:46.107 04:19:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:46.107 04:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.107 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:46.107 ... 00:23:46.107 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:46.107 ... 00:23:46.107 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:46.107 ... 00:23:46.107 fio-3.35 00:23:46.107 Starting 24 threads 00:23:56.079 00:23:56.079 filename0: (groupid=0, jobs=1): err= 0: pid=99489: Tue Jul 23 04:19:49 2024 00:23:56.079 read: IOPS=233, BW=933KiB/s (956kB/s)(9344KiB/10013msec) 00:23:56.079 slat (usec): min=5, max=8034, avg=35.94, stdev=291.46 00:23:56.079 clat (msec): min=24, max=135, avg=68.40, stdev=19.90 00:23:56.079 lat (msec): min=24, max=135, avg=68.44, stdev=19.89 00:23:56.079 clat percentiles (msec): 00:23:56.079 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 48], 00:23:56.079 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 71], 00:23:56.079 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 100], 95.00th=[ 109], 00:23:56.079 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 136], 99.95th=[ 136], 00:23:56.079 | 99.99th=[ 136] 00:23:56.079 bw ( KiB/s): min= 688, max= 1120, per=4.32%, avg=930.15, stdev=120.44, samples=20 00:23:56.079 iops : min= 172, max= 280, avg=232.50, stdev=30.07, samples=20 00:23:56.080 lat (msec) : 50=23.72%, 100=67.04%, 250=9.25% 00:23:56.080 cpu : usr=38.23%, sys=1.42%, ctx=1131, majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename0: (groupid=0, jobs=1): err= 0: pid=99490: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=227, BW=908KiB/s (930kB/s)(9112KiB/10032msec) 00:23:56.080 slat (usec): min=3, max=6055, avg=27.69, stdev=207.54 00:23:56.080 clat (msec): min=34, max=135, avg=70.29, stdev=20.29 00:23:56.080 lat (msec): min=34, max=135, avg=70.32, stdev=20.30 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 40], 5.00th=[ 44], 10.00th=[ 46], 20.00th=[ 50], 00:23:56.080 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:23:56.080 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 110], 00:23:56.080 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 136], 00:23:56.080 | 99.99th=[ 136] 00:23:56.080 bw ( KiB/s): min= 632, max= 1111, per=4.21%, avg=906.50, stdev=136.12, samples=20 00:23:56.080 iops : min= 158, max= 277, avg=226.55, stdev=33.94, samples=20 00:23:56.080 lat (msec) : 50=21.12%, 100=66.81%, 250=12.07% 00:23:56.080 cpu : usr=38.62%, sys=1.71%, ctx=1812, 
majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename0: (groupid=0, jobs=1): err= 0: pid=99491: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=230, BW=921KiB/s (943kB/s)(9244KiB/10033msec) 00:23:56.080 slat (usec): min=7, max=9031, avg=31.46, stdev=344.21 00:23:56.080 clat (msec): min=23, max=133, avg=69.29, stdev=19.75 00:23:56.080 lat (msec): min=23, max=133, avg=69.32, stdev=19.76 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 48], 00:23:56.080 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:23:56.080 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 109], 00:23:56.080 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 134], 99.95th=[ 134], 00:23:56.080 | 99.99th=[ 134] 00:23:56.080 bw ( KiB/s): min= 688, max= 1095, per=4.26%, avg=917.15, stdev=128.67, samples=20 00:23:56.080 iops : min= 172, max= 273, avg=229.25, stdev=32.11, samples=20 00:23:56.080 lat (msec) : 50=23.80%, 100=66.94%, 250=9.26% 00:23:56.080 cpu : usr=35.01%, sys=1.49%, ctx=946, majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename0: (groupid=0, jobs=1): err= 0: pid=99492: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=210, BW=842KiB/s (863kB/s)(8448KiB/10028msec) 00:23:56.080 slat (usec): min=4, max=8054, avg=41.91, stdev=435.46 00:23:56.080 clat (msec): min=35, max=145, avg=75.73, stdev=20.14 00:23:56.080 lat (msec): min=35, max=145, avg=75.77, stdev=20.15 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 62], 00:23:56.080 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:23:56.080 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 108], 95.00th=[ 116], 00:23:56.080 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 146], 00:23:56.080 | 99.99th=[ 146] 00:23:56.080 bw ( KiB/s): min= 512, max= 1024, per=3.89%, avg=838.45, stdev=127.04, samples=20 00:23:56.080 iops : min= 128, max= 256, avg=209.60, stdev=31.75, samples=20 00:23:56.080 lat (msec) : 50=9.61%, 100=76.23%, 250=14.16% 00:23:56.080 cpu : usr=34.54%, sys=1.40%, ctx=967, majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=2.7%, 4=10.6%, 8=72.0%, 16=14.7%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=90.2%, 8=7.5%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename0: (groupid=0, jobs=1): err= 0: pid=99493: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=226, BW=908KiB/s (929kB/s)(9108KiB/10034msec) 00:23:56.080 slat (usec): min=5, max=4062, avg=24.50, stdev=170.15 00:23:56.080 clat (msec): min=22, max=151, 
avg=70.34, stdev=20.49 00:23:56.080 lat (msec): min=22, max=151, avg=70.36, stdev=20.49 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 51], 00:23:56.080 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 69], 60.00th=[ 72], 00:23:56.080 | 70.00th=[ 77], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 110], 00:23:56.080 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 131], 99.95th=[ 131], 00:23:56.080 | 99.99th=[ 153] 00:23:56.080 bw ( KiB/s): min= 680, max= 1144, per=4.20%, avg=905.90, stdev=142.23, samples=20 00:23:56.080 iops : min= 170, max= 286, avg=226.40, stdev=35.47, samples=20 00:23:56.080 lat (msec) : 50=19.94%, 100=67.90%, 250=12.17% 00:23:56.080 cpu : usr=44.03%, sys=1.67%, ctx=1452, majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename0: (groupid=0, jobs=1): err= 0: pid=99494: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=211, BW=845KiB/s (865kB/s)(8468KiB/10025msec) 00:23:56.080 slat (usec): min=5, max=8035, avg=21.52, stdev=175.89 00:23:56.080 clat (msec): min=25, max=142, avg=75.58, stdev=19.78 00:23:56.080 lat (msec): min=25, max=142, avg=75.61, stdev=19.78 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:23:56.080 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:23:56.080 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 110], 00:23:56.080 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 142], 99.95th=[ 142], 00:23:56.080 | 99.99th=[ 142] 00:23:56.080 bw ( KiB/s): min= 632, max= 1000, per=3.90%, avg=839.90, stdev=111.34, samples=20 00:23:56.080 iops : min= 158, max= 250, avg=209.95, stdev=27.85, samples=20 00:23:56.080 lat (msec) : 50=12.71%, 100=73.93%, 250=13.37% 00:23:56.080 cpu : usr=36.30%, sys=1.48%, ctx=1268, majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=2.3%, 4=9.0%, 8=73.6%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=89.8%, 8=8.3%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename0: (groupid=0, jobs=1): err= 0: pid=99495: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=230, BW=923KiB/s (945kB/s)(9228KiB/10002msec) 00:23:56.080 slat (usec): min=3, max=10047, avg=41.94, stdev=415.12 00:23:56.080 clat (msec): min=4, max=134, avg=69.21, stdev=20.37 00:23:56.080 lat (msec): min=4, max=134, avg=69.25, stdev=20.37 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 48], 00:23:56.080 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:23:56.080 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 104], 95.00th=[ 110], 00:23:56.080 | 99.00th=[ 125], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 136], 00:23:56.080 | 99.99th=[ 136] 00:23:56.080 bw ( KiB/s): min= 712, max= 1080, per=4.23%, avg=910.74, stdev=101.87, samples=19 00:23:56.080 iops : min= 178, max= 270, avg=227.68, stdev=25.47, samples=19 00:23:56.080 lat (msec) : 10=0.26%, 20=0.17%, 50=22.76%, 
100=66.28%, 250=10.53% 00:23:56.080 cpu : usr=34.95%, sys=1.33%, ctx=1090, majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename0: (groupid=0, jobs=1): err= 0: pid=99496: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=223, BW=893KiB/s (915kB/s)(8968KiB/10041msec) 00:23:56.080 slat (usec): min=3, max=4032, avg=20.91, stdev=146.81 00:23:56.080 clat (msec): min=5, max=144, avg=71.49, stdev=21.44 00:23:56.080 lat (msec): min=5, max=144, avg=71.51, stdev=21.43 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 7], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 54], 00:23:56.080 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:23:56.080 | 70.00th=[ 79], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 109], 00:23:56.080 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 136], 99.95th=[ 144], 00:23:56.080 | 99.99th=[ 144] 00:23:56.080 bw ( KiB/s): min= 664, max= 1280, per=4.13%, avg=889.95, stdev=146.55, samples=20 00:23:56.080 iops : min= 166, max= 320, avg=222.45, stdev=36.59, samples=20 00:23:56.080 lat (msec) : 10=2.14%, 50=15.74%, 100=70.52%, 250=11.60% 00:23:56.080 cpu : usr=39.97%, sys=1.45%, ctx=1096, majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=79.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename1: (groupid=0, jobs=1): err= 0: pid=99497: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=230, BW=923KiB/s (945kB/s)(9268KiB/10044msec) 00:23:56.080 slat (usec): min=4, max=4024, avg=24.62, stdev=179.20 00:23:56.080 clat (msec): min=2, max=154, avg=69.19, stdev=23.28 00:23:56.080 lat (msec): min=2, max=154, avg=69.21, stdev=23.27 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 7], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 50], 00:23:56.080 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 72], 00:23:56.080 | 70.00th=[ 77], 80.00th=[ 88], 90.00th=[ 103], 95.00th=[ 110], 00:23:56.080 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 136], 99.95th=[ 136], 00:23:56.080 | 99.99th=[ 155] 00:23:56.080 bw ( KiB/s): min= 664, max= 1808, per=4.28%, avg=921.60, stdev=243.64, samples=20 00:23:56.080 iops : min= 166, max= 452, avg=230.40, stdev=60.91, samples=20 00:23:56.080 lat (msec) : 4=0.09%, 10=2.68%, 20=0.91%, 50=16.70%, 100=67.41% 00:23:56.080 lat (msec) : 250=12.21% 00:23:56.080 cpu : usr=45.76%, sys=1.78%, ctx=1370, majf=0, minf=0 00:23:56.080 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.7%, 16=16.7%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=88.0%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename1: (groupid=0, jobs=1): err= 0: pid=99498: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=224, BW=898KiB/s 
(919kB/s)(9020KiB/10048msec) 00:23:56.080 slat (usec): min=3, max=8033, avg=29.12, stdev=249.83 00:23:56.080 clat (msec): min=19, max=137, avg=71.09, stdev=20.29 00:23:56.080 lat (msec): min=19, max=137, avg=71.12, stdev=20.29 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 32], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 50], 00:23:56.080 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 72], 00:23:56.080 | 70.00th=[ 77], 80.00th=[ 87], 90.00th=[ 102], 95.00th=[ 111], 00:23:56.080 | 99.00th=[ 120], 99.50th=[ 127], 99.90th=[ 138], 99.95th=[ 138], 00:23:56.080 | 99.99th=[ 138] 00:23:56.080 bw ( KiB/s): min= 656, max= 1128, per=4.16%, avg=895.35, stdev=137.36, samples=20 00:23:56.080 iops : min= 164, max= 282, avg=223.80, stdev=34.29, samples=20 00:23:56.080 lat (msec) : 20=0.71%, 50=19.87%, 100=69.09%, 250=10.33% 00:23:56.080 cpu : usr=39.57%, sys=1.60%, ctx=1152, majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=81.2%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename1: (groupid=0, jobs=1): err= 0: pid=99499: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=211, BW=845KiB/s (866kB/s)(8480KiB/10032msec) 00:23:56.080 slat (usec): min=7, max=8024, avg=32.96, stdev=351.55 00:23:56.080 clat (msec): min=37, max=143, avg=75.48, stdev=20.09 00:23:56.080 lat (msec): min=37, max=143, avg=75.51, stdev=20.10 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 60], 00:23:56.080 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:23:56.080 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 112], 00:23:56.080 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 140], 99.95th=[ 144], 00:23:56.080 | 99.99th=[ 144] 00:23:56.080 bw ( KiB/s): min= 584, max= 1048, per=3.91%, avg=843.65, stdev=113.16, samples=20 00:23:56.080 iops : min= 146, max= 262, avg=210.90, stdev=28.29, samples=20 00:23:56.080 lat (msec) : 50=12.31%, 100=73.87%, 250=13.82% 00:23:56.080 cpu : usr=36.40%, sys=1.25%, ctx=1079, majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=76.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=89.3%, 8=9.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename1: (groupid=0, jobs=1): err= 0: pid=99500: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=228, BW=912KiB/s (934kB/s)(9132KiB/10010msec) 00:23:56.080 slat (usec): min=4, max=8071, avg=28.54, stdev=290.86 00:23:56.080 clat (msec): min=11, max=137, avg=70.00, stdev=20.89 00:23:56.080 lat (msec): min=11, max=137, avg=70.03, stdev=20.89 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 46], 20.00th=[ 48], 00:23:56.080 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 72], 00:23:56.080 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 111], 00:23:56.080 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 138], 99.95th=[ 138], 00:23:56.080 | 99.99th=[ 138] 00:23:56.080 bw ( KiB/s): min= 641, max= 1096, per=4.21%, avg=906.85, stdev=133.41, 
samples=20 00:23:56.080 iops : min= 160, max= 274, avg=226.70, stdev=33.38, samples=20 00:23:56.080 lat (msec) : 20=0.26%, 50=22.21%, 100=66.45%, 250=11.08% 00:23:56.080 cpu : usr=37.37%, sys=1.29%, ctx=1138, majf=0, minf=9 00:23:56.080 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:23:56.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.080 issued rwts: total=2283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.080 filename1: (groupid=0, jobs=1): err= 0: pid=99501: Tue Jul 23 04:19:49 2024 00:23:56.080 read: IOPS=220, BW=881KiB/s (902kB/s)(8852KiB/10049msec) 00:23:56.080 slat (usec): min=4, max=8021, avg=25.61, stdev=240.45 00:23:56.080 clat (msec): min=5, max=179, avg=72.43, stdev=22.30 00:23:56.080 lat (msec): min=5, max=179, avg=72.45, stdev=22.30 00:23:56.080 clat percentiles (msec): 00:23:56.080 | 1.00th=[ 6], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 58], 00:23:56.080 | 30.00th=[ 65], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:23:56.080 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 106], 95.00th=[ 110], 00:23:56.080 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 140], 00:23:56.080 | 99.99th=[ 180] 00:23:56.080 bw ( KiB/s): min= 632, max= 1424, per=4.08%, avg=878.80, stdev=168.84, samples=20 00:23:56.080 iops : min= 158, max= 356, avg=219.70, stdev=42.21, samples=20 00:23:56.080 lat (msec) : 10=2.80%, 20=0.18%, 50=12.34%, 100=72.75%, 250=11.93% 00:23:56.080 cpu : usr=39.41%, sys=1.51%, ctx=1200, majf=0, minf=0 00:23:56.080 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=74.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=89.6%, 8=8.7%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename1: (groupid=0, jobs=1): err= 0: pid=99502: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=226, BW=904KiB/s (926kB/s)(9064KiB/10026msec) 00:23:56.081 slat (usec): min=4, max=12023, avg=32.94, stdev=385.35 00:23:56.081 clat (msec): min=25, max=153, avg=70.62, stdev=20.64 00:23:56.081 lat (msec): min=25, max=153, avg=70.65, stdev=20.64 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:23:56.081 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:23:56.081 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 110], 00:23:56.081 | 99.00th=[ 128], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 153], 00:23:56.081 | 99.99th=[ 155] 00:23:56.081 bw ( KiB/s): min= 664, max= 1125, per=4.19%, avg=902.70, stdev=122.26, samples=20 00:23:56.081 iops : min= 166, max= 281, avg=225.65, stdev=30.55, samples=20 00:23:56.081 lat (msec) : 50=21.45%, 100=67.56%, 250=10.99% 00:23:56.081 cpu : usr=32.33%, sys=1.15%, ctx=954, majf=0, minf=9 00:23:56.081 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename1: (groupid=0, jobs=1): 
err= 0: pid=99503: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=222, BW=888KiB/s (910kB/s)(8892KiB/10011msec) 00:23:56.081 slat (usec): min=3, max=8051, avg=34.22, stdev=294.94 00:23:56.081 clat (msec): min=7, max=173, avg=71.89, stdev=19.88 00:23:56.081 lat (msec): min=11, max=173, avg=71.92, stdev=19.87 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 57], 00:23:56.081 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 72], 00:23:56.081 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 102], 95.00th=[ 110], 00:23:56.081 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 174], 00:23:56.081 | 99.99th=[ 174] 00:23:56.081 bw ( KiB/s): min= 712, max= 1048, per=4.10%, avg=883.60, stdev=113.06, samples=20 00:23:56.081 iops : min= 178, max= 262, avg=220.90, stdev=28.27, samples=20 00:23:56.081 lat (msec) : 10=0.04%, 20=0.13%, 50=15.20%, 100=74.00%, 250=10.62% 00:23:56.081 cpu : usr=41.90%, sys=1.75%, ctx=1081, majf=0, minf=9 00:23:56.081 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=76.6%, 16=14.9%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=88.8%, 8=9.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename1: (groupid=0, jobs=1): err= 0: pid=99504: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=225, BW=901KiB/s (923kB/s)(9028KiB/10020msec) 00:23:56.081 slat (usec): min=3, max=9042, avg=31.29, stdev=281.49 00:23:56.081 clat (msec): min=22, max=189, avg=70.87, stdev=20.24 00:23:56.081 lat (msec): min=22, max=189, avg=70.90, stdev=20.23 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 52], 00:23:56.081 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 69], 60.00th=[ 72], 00:23:56.081 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 108], 00:23:56.081 | 99.00th=[ 127], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 190], 00:23:56.081 | 99.99th=[ 190] 00:23:56.081 bw ( KiB/s): min= 664, max= 1032, per=4.16%, avg=896.35, stdev=112.35, samples=20 00:23:56.081 iops : min= 166, max= 258, avg=224.05, stdev=28.10, samples=20 00:23:56.081 lat (msec) : 50=18.61%, 100=71.64%, 250=9.75% 00:23:56.081 cpu : usr=37.91%, sys=1.45%, ctx=1149, majf=0, minf=9 00:23:56.081 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.4%, 16=15.2%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=88.3%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename2: (groupid=0, jobs=1): err= 0: pid=99505: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=220, BW=881KiB/s (902kB/s)(8832KiB/10026msec) 00:23:56.081 slat (usec): min=6, max=8030, avg=27.19, stdev=210.95 00:23:56.081 clat (msec): min=35, max=142, avg=72.48, stdev=20.10 00:23:56.081 lat (msec): min=35, max=142, avg=72.51, stdev=20.11 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:23:56.081 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 72], 00:23:56.081 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 110], 00:23:56.081 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 140], 99.95th=[ 142], 00:23:56.081 | 
99.99th=[ 142] 00:23:56.081 bw ( KiB/s): min= 632, max= 1080, per=4.08%, avg=879.15, stdev=122.38, samples=20 00:23:56.081 iops : min= 158, max= 270, avg=219.75, stdev=30.57, samples=20 00:23:56.081 lat (msec) : 50=16.62%, 100=71.20%, 250=12.18% 00:23:56.081 cpu : usr=34.83%, sys=1.28%, ctx=1046, majf=0, minf=9 00:23:56.081 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename2: (groupid=0, jobs=1): err= 0: pid=99506: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=231, BW=926KiB/s (948kB/s)(9264KiB/10007msec) 00:23:56.081 slat (usec): min=5, max=8045, avg=50.13, stdev=456.90 00:23:56.081 clat (msec): min=7, max=174, avg=68.94, stdev=20.65 00:23:56.081 lat (msec): min=7, max=174, avg=68.99, stdev=20.66 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 48], 00:23:56.081 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:23:56.081 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 103], 95.00th=[ 109], 00:23:56.081 | 99.00th=[ 127], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 176], 00:23:56.081 | 99.99th=[ 176] 00:23:56.081 bw ( KiB/s): min= 712, max= 1128, per=4.25%, avg=915.79, stdev=119.86, samples=19 00:23:56.081 iops : min= 178, max= 282, avg=228.95, stdev=29.97, samples=19 00:23:56.081 lat (msec) : 10=0.13%, 20=0.30%, 50=22.88%, 100=66.45%, 250=10.23% 00:23:56.081 cpu : usr=38.12%, sys=1.45%, ctx=1042, majf=0, minf=9 00:23:56.081 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename2: (groupid=0, jobs=1): err= 0: pid=99507: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=214, BW=856KiB/s (877kB/s)(8592KiB/10034msec) 00:23:56.081 slat (usec): min=5, max=8030, avg=23.00, stdev=244.65 00:23:56.081 clat (msec): min=36, max=153, avg=74.61, stdev=19.44 00:23:56.081 lat (msec): min=36, max=153, avg=74.63, stdev=19.44 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 60], 00:23:56.081 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:23:56.081 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 109], 00:23:56.081 | 99.00th=[ 123], 99.50th=[ 123], 99.90th=[ 142], 99.95th=[ 146], 00:23:56.081 | 99.99th=[ 155] 00:23:56.081 bw ( KiB/s): min= 640, max= 1024, per=3.96%, avg=852.05, stdev=107.89, samples=20 00:23:56.081 iops : min= 160, max= 256, avg=213.00, stdev=26.97, samples=20 00:23:56.081 lat (msec) : 50=13.22%, 100=75.05%, 250=11.73% 00:23:56.081 cpu : usr=33.13%, sys=1.52%, ctx=947, majf=0, minf=9 00:23:56.081 IO depths : 1=0.1%, 2=1.8%, 4=7.0%, 8=75.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=89.5%, 8=9.0%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename2: (groupid=0, jobs=1): err= 0: pid=99508: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=229, BW=918KiB/s (940kB/s)(9184KiB/10002msec) 00:23:56.081 slat (usec): min=4, max=8030, avg=34.93, stdev=274.50 00:23:56.081 clat (usec): min=1760, max=165782, avg=69548.70, stdev=21220.83 00:23:56.081 lat (usec): min=1768, max=165795, avg=69583.63, stdev=21215.23 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 12], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 49], 00:23:56.081 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:23:56.081 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 101], 95.00th=[ 109], 00:23:56.081 | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 136], 99.95th=[ 167], 00:23:56.081 | 99.99th=[ 167] 00:23:56.081 bw ( KiB/s): min= 720, max= 1024, per=4.17%, avg=898.11, stdev=105.76, samples=19 00:23:56.081 iops : min= 180, max= 256, avg=224.53, stdev=26.44, samples=19 00:23:56.081 lat (msec) : 2=0.26%, 4=0.39%, 10=0.30%, 20=0.30%, 50=20.47% 00:23:56.081 lat (msec) : 100=68.42%, 250=9.84% 00:23:56.081 cpu : usr=40.40%, sys=1.48%, ctx=1143, majf=0, minf=9 00:23:56.081 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.9%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=88.0%, 8=10.9%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename2: (groupid=0, jobs=1): err= 0: pid=99509: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=230, BW=922KiB/s (944kB/s)(9264KiB/10048msec) 00:23:56.081 slat (usec): min=4, max=9043, avg=33.98, stdev=303.14 00:23:56.081 clat (msec): min=4, max=147, avg=69.15, stdev=23.06 00:23:56.081 lat (msec): min=4, max=147, avg=69.18, stdev=23.06 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 6], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 49], 00:23:56.081 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 71], 00:23:56.081 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 105], 95.00th=[ 111], 00:23:56.081 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 138], 99.95th=[ 138], 00:23:56.081 | 99.99th=[ 148] 00:23:56.081 bw ( KiB/s): min= 656, max= 1523, per=4.27%, avg=920.15, stdev=192.60, samples=20 00:23:56.081 iops : min= 164, max= 380, avg=230.00, stdev=48.03, samples=20 00:23:56.081 lat (msec) : 10=2.16%, 20=0.60%, 50=20.21%, 100=64.12%, 250=12.91% 00:23:56.081 cpu : usr=40.20%, sys=1.72%, ctx=1795, majf=0, minf=0 00:23:56.081 IO depths : 1=0.1%, 2=0.9%, 4=3.2%, 8=79.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename2: (groupid=0, jobs=1): err= 0: pid=99510: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=231, BW=928KiB/s (950kB/s)(9296KiB/10021msec) 00:23:56.081 slat (usec): min=3, max=9023, avg=34.37, stdev=358.47 00:23:56.081 clat (msec): min=24, max=135, avg=68.84, stdev=20.17 00:23:56.081 lat (msec): min=24, max=135, avg=68.87, stdev=20.19 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 48], 00:23:56.081 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 
00:23:56.081 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 108], 00:23:56.081 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 136], 00:23:56.081 | 99.99th=[ 136] 00:23:56.081 bw ( KiB/s): min= 664, max= 1152, per=4.29%, avg=923.05, stdev=127.49, samples=20 00:23:56.081 iops : min= 166, max= 288, avg=230.75, stdev=31.86, samples=20 00:23:56.081 lat (msec) : 50=23.92%, 100=66.74%, 250=9.34% 00:23:56.081 cpu : usr=35.60%, sys=1.32%, ctx=1075, majf=0, minf=9 00:23:56.081 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename2: (groupid=0, jobs=1): err= 0: pid=99511: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=230, BW=923KiB/s (945kB/s)(9236KiB/10003msec) 00:23:56.081 slat (usec): min=5, max=9025, avg=36.29, stdev=379.37 00:23:56.081 clat (msec): min=5, max=163, avg=69.17, stdev=20.38 00:23:56.081 lat (msec): min=5, max=163, avg=69.20, stdev=20.40 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 48], 00:23:56.081 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:23:56.081 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 101], 95.00th=[ 108], 00:23:56.081 | 99.00th=[ 125], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 163], 00:23:56.081 | 99.99th=[ 165] 00:23:56.081 bw ( KiB/s): min= 712, max= 1040, per=4.23%, avg=910.32, stdev=108.47, samples=19 00:23:56.081 iops : min= 178, max= 260, avg=227.58, stdev=27.12, samples=19 00:23:56.081 lat (msec) : 10=0.43%, 20=0.26%, 50=21.78%, 100=67.43%, 250=10.09% 00:23:56.081 cpu : usr=34.88%, sys=1.34%, ctx=987, majf=0, minf=9 00:23:56.081 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=80.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 filename2: (groupid=0, jobs=1): err= 0: pid=99512: Tue Jul 23 04:19:49 2024 00:23:56.081 read: IOPS=227, BW=909KiB/s (931kB/s)(9112KiB/10026msec) 00:23:56.081 slat (usec): min=6, max=9034, avg=24.13, stdev=252.89 00:23:56.081 clat (msec): min=34, max=140, avg=70.29, stdev=20.06 00:23:56.081 lat (msec): min=34, max=140, avg=70.31, stdev=20.07 00:23:56.081 clat percentiles (msec): 00:23:56.081 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 48], 00:23:56.081 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:23:56.081 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 109], 00:23:56.081 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 138], 99.95th=[ 140], 00:23:56.081 | 99.99th=[ 142] 00:23:56.081 bw ( KiB/s): min= 688, max= 1141, per=4.20%, avg=904.65, stdev=126.72, samples=20 00:23:56.081 iops : min= 172, max= 285, avg=226.15, stdev=31.66, samples=20 00:23:56.081 lat (msec) : 50=22.78%, 100=66.59%, 250=10.62% 00:23:56.081 cpu : usr=32.09%, sys=1.09%, ctx=949, majf=0, minf=9 00:23:56.081 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:56.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 complete 
: 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.081 issued rwts: total=2278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.081 00:23:56.081 Run status group 0 (all jobs): 00:23:56.081 READ: bw=21.0MiB/s (22.1MB/s), 842KiB/s-933KiB/s (863kB/s-956kB/s), io=211MiB (222MB), run=10002-10049msec 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 
04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 bdev_null0 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 [2024-07-23 04:19:49.769179] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 bdev_null1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.649 { 00:23:56.649 "params": { 00:23:56.649 "name": "Nvme$subsystem", 00:23:56.649 "trtype": "$TEST_TRANSPORT", 00:23:56.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.649 "adrfam": "ipv4", 00:23:56.649 "trsvcid": "$NVMF_PORT", 00:23:56.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.649 "hdgst": ${hdgst:-false}, 00:23:56.649 "ddgst": ${ddgst:-false} 00:23:56.649 }, 00:23:56.649 "method": "bdev_nvme_attach_controller" 00:23:56.649 } 00:23:56.649 EOF 00:23:56.649 )") 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.649 { 00:23:56.649 "params": { 00:23:56.649 "name": "Nvme$subsystem", 00:23:56.649 "trtype": "$TEST_TRANSPORT", 00:23:56.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.649 "adrfam": "ipv4", 00:23:56.649 "trsvcid": "$NVMF_PORT", 00:23:56.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.649 "hdgst": ${hdgst:-false}, 00:23:56.649 "ddgst": ${ddgst:-false} 00:23:56.649 }, 00:23:56.649 "method": "bdev_nvme_attach_controller" 00:23:56.649 } 00:23:56.649 EOF 00:23:56.649 )") 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
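For context, the xtrace above is assembling two file descriptors and handing them to the fio bdev plugin: the generated JSON attach-controller config goes over /dev/fd/62 and the generated job file over /dev/fd/61, with the plugin preloaded into the stock fio binary. A rough standalone equivalent, using ordinary files in place of the descriptors (plugin and fio paths taken from this run), would be:

# Sketch of the launch performed by fio_bdev/fio_plugin in the trace above;
# bdev.json stands in for the stream on /dev/fd/62, job.fio for /dev/fd/61.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio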
00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:56.649 "params": { 00:23:56.649 "name": "Nvme0", 00:23:56.649 "trtype": "tcp", 00:23:56.649 "traddr": "10.0.0.2", 00:23:56.649 "adrfam": "ipv4", 00:23:56.649 "trsvcid": "4420", 00:23:56.649 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:56.649 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:56.649 "hdgst": false, 00:23:56.649 "ddgst": false 00:23:56.649 }, 00:23:56.649 "method": "bdev_nvme_attach_controller" 00:23:56.649 },{ 00:23:56.649 "params": { 00:23:56.649 "name": "Nvme1", 00:23:56.649 "trtype": "tcp", 00:23:56.649 "traddr": "10.0.0.2", 00:23:56.649 "adrfam": "ipv4", 00:23:56.649 "trsvcid": "4420", 00:23:56.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.649 "hdgst": false, 00:23:56.649 "ddgst": false 00:23:56.649 }, 00:23:56.649 "method": "bdev_nvme_attach_controller" 00:23:56.649 }' 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:56.649 04:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.907 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:56.907 ... 00:23:56.907 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:56.907 ... 
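The four threads that start below come from the parameters set at target/dif.sh@115: two generated filename sections (files=1 adds a second subsystem) times numjobs=2, each doing randread at iodepth=8 with bs=8k,16k,128k for 5 seconds. A hand-written job file with the same shape might look like the sketch below; the option names are plain fio syntax, and the Nvme0n1/Nvme1n1 filenames are an assumption based on the Nvme0/Nvme1 controller names in the JSON above.

# Hedged sketch of an equivalent job file for the run that follows.
cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
# thread mode and time_based are assumptions; the trace only fixes the values below
thread=1
time_based=1
runtime=5
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF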
00:23:56.907 fio-3.35 00:23:56.907 Starting 4 threads 00:24:03.463 00:24:03.464 filename0: (groupid=0, jobs=1): err= 0: pid=99661: Tue Jul 23 04:19:55 2024 00:24:03.464 read: IOPS=2242, BW=17.5MiB/s (18.4MB/s)(87.6MiB/5003msec) 00:24:03.464 slat (nsec): min=6165, max=91771, avg=23459.40, stdev=11966.64 00:24:03.464 clat (usec): min=865, max=6423, avg=3497.35, stdev=891.39 00:24:03.464 lat (usec): min=883, max=6447, avg=3520.81, stdev=892.01 00:24:03.464 clat percentiles (usec): 00:24:03.464 | 1.00th=[ 1205], 5.00th=[ 1909], 10.00th=[ 2147], 20.00th=[ 2671], 00:24:03.464 | 30.00th=[ 3097], 40.00th=[ 3425], 50.00th=[ 3621], 60.00th=[ 3884], 00:24:03.464 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4686], 00:24:03.464 | 99.00th=[ 5080], 99.50th=[ 5276], 99.90th=[ 5866], 99.95th=[ 6128], 00:24:03.464 | 99.99th=[ 6259] 00:24:03.464 bw ( KiB/s): min=16720, max=19856, per=24.82%, avg=17969.78, stdev=981.87, samples=9 00:24:03.464 iops : min= 2090, max= 2482, avg=2246.22, stdev=122.73, samples=9 00:24:03.464 lat (usec) : 1000=0.10% 00:24:03.464 lat (msec) : 2=7.23%, 4=58.20%, 10=34.47% 00:24:03.464 cpu : usr=94.04%, sys=5.08%, ctx=9, majf=0, minf=9 00:24:03.464 IO depths : 1=1.5%, 2=7.5%, 4=59.7%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.464 complete : 0=0.0%, 4=97.1%, 8=2.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.464 issued rwts: total=11219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:03.464 filename0: (groupid=0, jobs=1): err= 0: pid=99662: Tue Jul 23 04:19:55 2024 00:24:03.464 read: IOPS=2283, BW=17.8MiB/s (18.7MB/s)(89.2MiB/5001msec) 00:24:03.464 slat (nsec): min=6140, max=87710, avg=22587.25, stdev=11937.07 00:24:03.464 clat (usec): min=616, max=6167, avg=3436.73, stdev=927.37 00:24:03.464 lat (usec): min=628, max=6216, avg=3459.32, stdev=928.11 00:24:03.464 clat percentiles (usec): 00:24:03.464 | 1.00th=[ 1090], 5.00th=[ 1860], 10.00th=[ 2008], 20.00th=[ 2507], 00:24:03.464 | 30.00th=[ 3064], 40.00th=[ 3359], 50.00th=[ 3523], 60.00th=[ 3785], 00:24:03.464 | 70.00th=[ 4047], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4686], 00:24:03.464 | 99.00th=[ 5080], 99.50th=[ 5342], 99.90th=[ 5604], 99.95th=[ 5997], 00:24:03.464 | 99.99th=[ 6128] 00:24:03.464 bw ( KiB/s): min=17024, max=20576, per=25.31%, avg=18321.33, stdev=1057.94, samples=9 00:24:03.464 iops : min= 2128, max= 2572, avg=2290.11, stdev=132.29, samples=9 00:24:03.464 lat (usec) : 750=0.01%, 1000=0.39% 00:24:03.464 lat (msec) : 2=9.43%, 4=58.90%, 10=31.27% 00:24:03.464 cpu : usr=94.08%, sys=5.04%, ctx=7, majf=0, minf=0 00:24:03.464 IO depths : 1=1.3%, 2=7.0%, 4=60.0%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.464 complete : 0=0.0%, 4=97.3%, 8=2.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.464 issued rwts: total=11422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:03.464 filename1: (groupid=0, jobs=1): err= 0: pid=99663: Tue Jul 23 04:19:55 2024 00:24:03.464 read: IOPS=2284, BW=17.8MiB/s (18.7MB/s)(89.3MiB/5002msec) 00:24:03.464 slat (nsec): min=6144, max=94059, avg=19349.15, stdev=11540.23 00:24:03.464 clat (usec): min=520, max=6452, avg=3448.21, stdev=859.85 00:24:03.464 lat (usec): min=532, max=6494, avg=3467.56, stdev=859.74 00:24:03.464 clat percentiles (usec): 00:24:03.464 | 1.00th=[ 1401], 
5.00th=[ 1942], 10.00th=[ 2212], 20.00th=[ 2573], 00:24:03.464 | 30.00th=[ 2999], 40.00th=[ 3326], 50.00th=[ 3556], 60.00th=[ 3818], 00:24:03.464 | 70.00th=[ 4015], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4621], 00:24:03.464 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5800], 99.95th=[ 6063], 00:24:03.464 | 99.99th=[ 6325] 00:24:03.464 bw ( KiB/s): min=17040, max=20224, per=25.34%, avg=18341.89, stdev=1183.76, samples=9 00:24:03.464 iops : min= 2130, max= 2528, avg=2292.67, stdev=147.94, samples=9 00:24:03.464 lat (usec) : 750=0.03%, 1000=0.04% 00:24:03.464 lat (msec) : 2=6.29%, 4=62.42%, 10=31.22% 00:24:03.464 cpu : usr=94.52%, sys=4.52%, ctx=7, majf=0, minf=0 00:24:03.464 IO depths : 1=0.8%, 2=6.4%, 4=60.2%, 8=32.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.464 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.464 issued rwts: total=11425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:03.464 filename1: (groupid=0, jobs=1): err= 0: pid=99664: Tue Jul 23 04:19:55 2024 00:24:03.464 read: IOPS=2240, BW=17.5MiB/s (18.4MB/s)(87.5MiB/5001msec) 00:24:03.464 slat (usec): min=6, max=395, avg=22.63, stdev=12.38 00:24:03.464 clat (usec): min=335, max=6217, avg=3503.31, stdev=949.40 00:24:03.464 lat (usec): min=359, max=6225, avg=3525.94, stdev=950.02 00:24:03.464 clat percentiles (usec): 00:24:03.464 | 1.00th=[ 1057], 5.00th=[ 1745], 10.00th=[ 2089], 20.00th=[ 2769], 00:24:03.464 | 30.00th=[ 3163], 40.00th=[ 3425], 50.00th=[ 3621], 60.00th=[ 3949], 00:24:03.464 | 70.00th=[ 4113], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4686], 00:24:03.464 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5538], 99.95th=[ 5604], 00:24:03.464 | 99.99th=[ 6063] 00:24:03.464 bw ( KiB/s): min=15888, max=20992, per=24.75%, avg=17918.22, stdev=1586.17, samples=9 00:24:03.464 iops : min= 1986, max= 2624, avg=2239.78, stdev=198.27, samples=9 00:24:03.464 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.67% 00:24:03.464 lat (msec) : 2=7.86%, 4=54.45%, 10=36.98% 00:24:03.464 cpu : usr=93.70%, sys=5.08%, ctx=97, majf=0, minf=9 00:24:03.464 IO depths : 1=1.0%, 2=8.3%, 4=59.4%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.464 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.464 issued rwts: total=11203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:03.464 00:24:03.464 Run status group 0 (all jobs): 00:24:03.464 READ: bw=70.7MiB/s (74.1MB/s), 17.5MiB/s-17.8MiB/s (18.4MB/s-18.7MB/s), io=354MiB (371MB), run=5001-5003msec 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.464 ************************************ 00:24:03.464 END TEST fio_dif_rand_params 00:24:03.464 ************************************ 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.464 00:24:03.464 real 0m23.310s 00:24:03.464 user 2m5.276s 00:24:03.464 sys 0m6.326s 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:03.464 04:19:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.464 04:19:55 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:24:03.464 04:19:55 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:03.464 04:19:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:03.464 04:19:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:03.464 04:19:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:03.464 ************************************ 00:24:03.464 START TEST fio_dif_digest 00:24:03.464 ************************************ 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:03.464 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:03.465 bdev_null0 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:03.465 [2024-07-23 04:19:55.853734] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:03.465 { 00:24:03.465 "params": { 00:24:03.465 "name": "Nvme$subsystem", 00:24:03.465 "trtype": "$TEST_TRANSPORT", 00:24:03.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.465 "adrfam": "ipv4", 00:24:03.465 "trsvcid": "$NVMF_PORT", 00:24:03.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.465 "hdgst": ${hdgst:-false}, 00:24:03.465 "ddgst": ${ddgst:-false} 00:24:03.465 
}, 00:24:03.465 "method": "bdev_nvme_attach_controller" 00:24:03.465 } 00:24:03.465 EOF 00:24:03.465 )") 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
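For the digest variant the plumbing is the same as before, but the null bdev is created with DIF type 3 and the attach params printed below request TCP header and data digests ("hdgst": true, "ddgst": true). Reproduced by hand, the setup side reduces to the rpc_cmd calls visible in the trace; the arguments are copied from it, and only the rpc.py location is an assumption.

# Sketch of the create_subsystems 0 sequence for fio_dif_digest.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed helper path; rpc_cmd wraps it
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420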
00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:03.465 "params": { 00:24:03.465 "name": "Nvme0", 00:24:03.465 "trtype": "tcp", 00:24:03.465 "traddr": "10.0.0.2", 00:24:03.465 "adrfam": "ipv4", 00:24:03.465 "trsvcid": "4420", 00:24:03.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:03.465 "hdgst": true, 00:24:03.465 "ddgst": true 00:24:03.465 }, 00:24:03.465 "method": "bdev_nvme_attach_controller" 00:24:03.465 }' 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:03.465 04:19:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.465 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:03.465 ... 
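Note that the digest knobs never appear in the fio job file; they ride in the bdev_nvme_attach_controller params streamed over /dev/fd/62. If that stream is saved to a file, a quick check like the sketch below (filename assumed) surfaces them regardless of how the wrapper nests the config:

# Sketch: confirm the generated attach params request header/data digests.
jq '.. | objects | select(.method? == "bdev_nvme_attach_controller") | .params | {name, hdgst, ddgst}' bdev.json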
00:24:03.465 fio-3.35 00:24:03.465 Starting 3 threads 00:24:13.435 00:24:13.435 filename0: (groupid=0, jobs=1): err= 0: pid=99762: Tue Jul 23 04:20:06 2024 00:24:13.435 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(327MiB/10012msec) 00:24:13.435 slat (usec): min=6, max=100, avg=18.55, stdev=13.25 00:24:13.435 clat (usec): min=11014, max=13589, avg=11424.66, stdev=393.77 00:24:13.435 lat (usec): min=11022, max=13603, avg=11443.21, stdev=395.10 00:24:13.435 clat percentiles (usec): 00:24:13.435 | 1.00th=[11076], 5.00th=[11076], 10.00th=[11207], 20.00th=[11207], 00:24:13.435 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11338], 60.00th=[11338], 00:24:13.435 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[12387], 00:24:13.435 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13566], 99.95th=[13566], 00:24:13.435 | 99.99th=[13566] 00:24:13.435 bw ( KiB/s): min=33024, max=33792, per=33.34%, avg=33488.10, stdev=382.13, samples=20 00:24:13.435 iops : min= 258, max= 264, avg=261.60, stdev= 3.02, samples=20 00:24:13.435 lat (msec) : 20=100.00% 00:24:13.435 cpu : usr=95.86%, sys=3.60%, ctx=27, majf=0, minf=0 00:24:13.435 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.435 issued rwts: total=2619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:13.435 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:13.435 filename0: (groupid=0, jobs=1): err= 0: pid=99763: Tue Jul 23 04:20:06 2024 00:24:13.435 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(327MiB/10010msec) 00:24:13.435 slat (nsec): min=6623, max=74431, avg=19679.91, stdev=12962.88 00:24:13.435 clat (usec): min=8889, max=14593, avg=11419.27, stdev=417.84 00:24:13.435 lat (usec): min=8896, max=14620, avg=11438.95, stdev=419.40 00:24:13.435 clat percentiles (usec): 00:24:13.435 | 1.00th=[11076], 5.00th=[11076], 10.00th=[11076], 20.00th=[11207], 00:24:13.435 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11338], 60.00th=[11338], 00:24:13.435 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[12387], 00:24:13.435 | 99.00th=[13042], 99.50th=[13173], 99.90th=[14615], 99.95th=[14615], 00:24:13.435 | 99.99th=[14615] 00:24:13.435 bw ( KiB/s): min=33024, max=33792, per=33.36%, avg=33509.05, stdev=380.62, samples=19 00:24:13.435 iops : min= 258, max= 264, avg=261.79, stdev= 2.97, samples=19 00:24:13.435 lat (msec) : 10=0.11%, 20=99.89% 00:24:13.435 cpu : usr=95.63%, sys=3.81%, ctx=13, majf=0, minf=9 00:24:13.435 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.435 issued rwts: total=2619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:13.435 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:13.435 filename0: (groupid=0, jobs=1): err= 0: pid=99764: Tue Jul 23 04:20:06 2024 00:24:13.435 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(327MiB/10012msec) 00:24:13.435 slat (nsec): min=6272, max=85304, avg=17827.95, stdev=10624.12 00:24:13.435 clat (usec): min=10236, max=13543, avg=11426.81, stdev=399.77 00:24:13.435 lat (usec): min=10244, max=13567, avg=11444.64, stdev=400.80 00:24:13.435 clat percentiles (usec): 00:24:13.435 | 1.00th=[11076], 5.00th=[11076], 10.00th=[11076], 20.00th=[11207], 00:24:13.435 | 30.00th=[11207], 40.00th=[11207], 
50.00th=[11338], 60.00th=[11338], 00:24:13.435 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[12387], 00:24:13.435 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13435], 99.95th=[13566], 00:24:13.435 | 99.99th=[13566] 00:24:13.435 bw ( KiB/s): min=33024, max=33792, per=33.33%, avg=33484.80, stdev=386.02, samples=20 00:24:13.435 iops : min= 258, max= 264, avg=261.60, stdev= 3.02, samples=20 00:24:13.435 lat (msec) : 20=100.00% 00:24:13.435 cpu : usr=94.09%, sys=5.31%, ctx=14, majf=0, minf=9 00:24:13.435 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.435 issued rwts: total=2619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:13.435 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:13.435 00:24:13.435 Run status group 0 (all jobs): 00:24:13.435 READ: bw=98.1MiB/s (103MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=982MiB (1030MB), run=10010-10012msec 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:13.693 ************************************ 00:24:13.693 END TEST fio_dif_digest 00:24:13.693 ************************************ 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.693 00:24:13.693 real 0m11.087s 00:24:13.693 user 0m29.283s 00:24:13.693 sys 0m1.574s 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:13.693 04:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:13.693 04:20:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:24:13.693 04:20:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:13.693 04:20:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:13.693 04:20:06 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:13.693 04:20:06 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:24:13.693 04:20:07 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:13.693 04:20:07 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:24:13.693 04:20:07 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:13.693 04:20:07 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:13.693 rmmod nvme_tcp 00:24:13.951 rmmod nvme_fabrics 00:24:13.951 rmmod nvme_keyring 00:24:13.951 04:20:07 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:13.951 04:20:07 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:24:13.951 04:20:07 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:24:13.951 04:20:07 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 99005 ']' 00:24:13.951 04:20:07 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 99005 00:24:13.951 04:20:07 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 99005 ']' 00:24:13.951 04:20:07 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 99005 00:24:13.951 04:20:07 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:24:13.951 04:20:07 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.951 04:20:07 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99005 00:24:13.951 killing process with pid 99005 00:24:13.951 04:20:07 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:13.951 04:20:07 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:13.951 04:20:07 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99005' 00:24:13.951 04:20:07 nvmf_dif -- common/autotest_common.sh@967 -- # kill 99005 00:24:13.951 04:20:07 nvmf_dif -- common/autotest_common.sh@972 -- # wait 99005 00:24:13.951 04:20:07 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:13.951 04:20:07 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:14.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:14.518 Waiting for block devices as requested 00:24:14.518 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:14.518 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:14.518 04:20:07 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.518 04:20:07 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.518 04:20:07 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.518 04:20:07 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.518 04:20:07 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.518 04:20:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:14.518 04:20:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.777 04:20:07 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:14.777 00:24:14.777 real 0m59.452s 00:24:14.777 user 3m49.869s 00:24:14.777 sys 0m16.809s 00:24:14.777 04:20:07 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.777 ************************************ 00:24:14.777 END TEST nvmf_dif 00:24:14.777 ************************************ 00:24:14.777 04:20:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:14.777 04:20:07 -- common/autotest_common.sh@1142 -- # return 0 00:24:14.777 04:20:07 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:14.777 04:20:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:14.777 04:20:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.777 04:20:07 -- common/autotest_common.sh@10 -- # set +x 00:24:14.777 ************************************ 00:24:14.777 START TEST nvmf_abort_qd_sizes 00:24:14.777 ************************************ 00:24:14.777 04:20:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:14.777 * Looking for test storage... 00:24:14.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:14.777 04:20:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:14.777 Cannot find device "nvmf_tgt_br" 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:14.777 Cannot find device "nvmf_tgt_br2" 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:14.777 Cannot find device "nvmf_tgt_br" 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:14.777 Cannot find device "nvmf_tgt_br2" 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:24:14.777 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:15.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:15.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:15.036 04:20:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:15.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:24:15.036 00:24:15.036 --- 10.0.0.2 ping statistics --- 00:24:15.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.036 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:15.036 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:15.036 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:24:15.036 00:24:15.036 --- 10.0.0.3 ping statistics --- 00:24:15.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.036 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:24:15.036 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:15.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:15.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:24:15.294 00:24:15.294 --- 10.0.0.1 ping statistics --- 00:24:15.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.294 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:24:15.294 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.294 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:24:15.294 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:15.294 04:20:08 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:15.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:15.859 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:15.859 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=100359 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 100359 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 100359 ']' 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.117 04:20:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:16.117 [2024-07-23 04:20:09.327636] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:24:16.117 [2024-07-23 04:20:09.327946] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.117 [2024-07-23 04:20:09.452639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:24:16.374 [2024-07-23 04:20:09.471488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.374 [2024-07-23 04:20:09.546451] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.374 [2024-07-23 04:20:09.546767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.374 [2024-07-23 04:20:09.547067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.374 [2024-07-23 04:20:09.547248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.374 [2024-07-23 04:20:09.547294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.374 [2024-07-23 04:20:09.547599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.375 [2024-07-23 04:20:09.548043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.375 [2024-07-23 04:20:09.548047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.375 [2024-07-23 04:20:09.548213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.375 [2024-07-23 04:20:09.605784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- 
scripts/common.sh@234 -- # subclass=08 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:17.309 04:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:17.309 ************************************ 00:24:17.309 START TEST spdk_target_abort 00:24:17.309 ************************************ 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:17.309 spdk_targetn1 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:17.309 [2024-07-23 04:20:10.486324] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.309 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:17.310 [2024-07-23 04:20:10.514496] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:17.310 04:20:10 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:17.310 04:20:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:20.594 Initializing NVMe Controllers 00:24:20.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:20.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:20.594 Initialization complete. Launching workers. 
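For context, the rabort helper traced above only assembles the transport-ID string from its arguments and then runs the stock SPDK abort example once per queue depth in qds=(4 24 64). A minimal standalone sketch of that loop, reusing the paths and NQN from this run, might look like:

  # Sketch only; assumes the SPDK build tree and reachable target used in this job.
  ABORT=/home/vagrant/spdk_repo/spdk/build/examples/abort
  TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      # -q: queue depth, -w rw -M 50: mixed 50/50 read/write, -o 4096: 4 KiB I/Os
      "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
  done

Each pass prints the I/O and abort counters that follow below.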
00:24:20.594 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9471, failed: 0 00:24:20.594 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1060, failed to submit 8411 00:24:20.594 success 780, unsuccess 280, failed 0 00:24:20.594 04:20:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:20.594 04:20:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:23.875 Initializing NVMe Controllers 00:24:23.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:23.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:23.875 Initialization complete. Launching workers. 00:24:23.875 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8952, failed: 0 00:24:23.875 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1172, failed to submit 7780 00:24:23.875 success 366, unsuccess 806, failed 0 00:24:23.875 04:20:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:23.875 04:20:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:27.159 Initializing NVMe Controllers 00:24:27.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:27.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:27.159 Initialization complete. Launching workers. 
00:24:27.159 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30114, failed: 0 00:24:27.159 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2318, failed to submit 27796 00:24:27.159 success 492, unsuccess 1826, failed 0 00:24:27.159 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:27.159 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.159 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.159 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.159 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:27.159 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.159 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 100359 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 100359 ']' 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 100359 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100359 00:24:27.418 killing process with pid 100359 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100359' 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 100359 00:24:27.418 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 100359 00:24:27.677 ************************************ 00:24:27.677 END TEST spdk_target_abort 00:24:27.677 ************************************ 00:24:27.677 00:24:27.677 real 0m10.424s 00:24:27.677 user 0m42.310s 00:24:27.677 sys 0m2.069s 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.677 04:20:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:27.677 04:20:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:27.677 04:20:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:27.677 04:20:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:27.677 04:20:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:27.677 
************************************ 00:24:27.677 START TEST kernel_target_abort 00:24:27.677 ************************************ 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:27.677 04:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:27.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:27.936 Waiting for block devices as requested 00:24:28.194 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.194 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:28.194 No valid GPT data, bailing 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:28.194 No valid GPT data, bailing 00:24:28.194 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
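This loop (nvmf/common.sh@650-653) is how the test chooses a disk to back the kernel target: it walks /sys/block/nvme*, skips zoned namespaces, and remembers any device on which spdk-gpt.py and blkid find no partition table, ending up with /dev/nvme1n1 in this run. A rough standalone equivalent, using only blkid for the in-use check, could be:

  # Sketch; the real helper also consults scripts/spdk-gpt.py before falling back to blkid.
  nvme=
  for block in /sys/block/nvme*; do
      dev=/dev/${block##*/}
      # skip zoned namespaces
      [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]] && continue
      # a device with no recognizable partition table is treated as free
      if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
          nvme=$dev   # the last unused device wins, as in the trace (/dev/nvme1n1 here)
      fi
  done
  echo "selected backing device: ${nvme:-none}"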
00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:28.453 No valid GPT data, bailing 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:28.453 No valid GPT data, bailing 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:28.453 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 --hostid=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 -a 10.0.0.1 -t tcp -s 4420 00:24:28.454 00:24:28.454 Discovery Log Number of Records 2, Generation counter 2 00:24:28.454 =====Discovery Log Entry 0====== 00:24:28.454 trtype: tcp 00:24:28.454 adrfam: ipv4 00:24:28.454 subtype: current discovery subsystem 00:24:28.454 treq: not specified, sq flow control disable supported 00:24:28.454 portid: 1 00:24:28.454 trsvcid: 4420 00:24:28.454 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:28.454 traddr: 10.0.0.1 00:24:28.454 eflags: none 00:24:28.454 sectype: none 00:24:28.454 =====Discovery Log Entry 1====== 00:24:28.454 trtype: tcp 00:24:28.454 adrfam: ipv4 00:24:28.454 subtype: nvme subsystem 00:24:28.454 treq: not specified, sq flow control disable supported 00:24:28.454 portid: 1 00:24:28.454 trsvcid: 4420 00:24:28.454 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:28.454 traddr: 10.0.0.1 00:24:28.454 eflags: none 00:24:28.454 sectype: none 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:28.454 04:20:21 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:28.454 04:20:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:31.740 Initializing NVMe Controllers 00:24:31.740 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:31.740 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:31.740 Initialization complete. Launching workers. 00:24:31.740 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32857, failed: 0 00:24:31.740 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32857, failed to submit 0 00:24:31.740 success 0, unsuccess 32857, failed 0 00:24:31.740 04:20:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:31.740 04:20:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:35.024 Initializing NVMe Controllers 00:24:35.024 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:35.024 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:35.024 Initialization complete. Launching workers. 
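The kernel target these abort runs hit was assembled a few entries earlier purely through nvmet configfs (nvmf/common.sh@658-677): create the subsystem and namespace directories, point the namespace at /dev/nvme1n1, describe the TCP port, and link the subsystem into the port. Reduced to a sketch (attribute names follow the stock nvmet configfs layout; the SPDK-… serial string the script also writes is omitted here):

  # Sketch of the configfs sequence traced above (device, NQN and address as in this run).
  modprobe nvmet
  modprobe nvmet-tcp   # may already be pulled in automatically when the port is enabled
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo 1             > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 10.0.0.1      > "$port/addr_traddr"
  echo tcp           > "$port/addr_trtype"
  echo 4420          > "$port/addr_trsvcid"
  echo ipv4          > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

Tearing it down is the reverse: unlink the port, remove the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet, which is what clean_kernel_target does at the end of this test.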
00:24:35.024 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65646, failed: 0 00:24:35.024 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27157, failed to submit 38489 00:24:35.024 success 0, unsuccess 27157, failed 0 00:24:35.024 04:20:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:35.024 04:20:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:38.307 Initializing NVMe Controllers 00:24:38.307 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:38.307 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:38.307 Initialization complete. Launching workers. 00:24:38.307 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70988, failed: 0 00:24:38.307 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17694, failed to submit 53294 00:24:38.307 success 0, unsuccess 17694, failed 0 00:24:38.307 04:20:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:38.307 04:20:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:38.307 04:20:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:38.307 04:20:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:38.307 04:20:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:38.307 04:20:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:38.307 04:20:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:38.307 04:20:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:38.307 04:20:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:38.307 04:20:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:38.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:39.468 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:39.726 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:39.726 00:24:39.726 real 0m12.027s 00:24:39.726 user 0m5.385s 00:24:39.726 sys 0m3.896s 00:24:39.726 04:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:39.726 04:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:39.726 ************************************ 00:24:39.726 END TEST kernel_target_abort 00:24:39.726 ************************************ 00:24:39.726 04:20:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:39.726 04:20:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:39.726 
04:20:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:39.726 04:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:39.726 04:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:39.726 04:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:39.726 04:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:39.726 04:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:39.726 04:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:39.726 rmmod nvme_tcp 00:24:39.726 rmmod nvme_fabrics 00:24:39.726 rmmod nvme_keyring 00:24:39.726 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:39.726 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:39.726 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:39.726 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 100359 ']' 00:24:39.726 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 100359 00:24:39.726 04:20:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 100359 ']' 00:24:39.726 04:20:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 100359 00:24:39.726 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (100359) - No such process 00:24:39.726 Process with pid 100359 is not found 00:24:39.726 04:20:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 100359 is not found' 00:24:39.726 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:39.726 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:40.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:40.292 Waiting for block devices as requested 00:24:40.292 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:40.292 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:40.292 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:40.292 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:40.292 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:40.292 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:40.292 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.292 04:20:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:40.292 04:20:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.551 04:20:33 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:40.551 00:24:40.551 real 0m25.721s 00:24:40.551 user 0m48.833s 00:24:40.551 sys 0m7.277s 00:24:40.551 04:20:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:40.551 04:20:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:40.551 ************************************ 00:24:40.551 END TEST nvmf_abort_qd_sizes 00:24:40.551 ************************************ 00:24:40.551 04:20:33 -- common/autotest_common.sh@1142 -- # return 0 00:24:40.551 04:20:33 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:40.551 04:20:33 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:24:40.551 04:20:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.551 04:20:33 -- common/autotest_common.sh@10 -- # set +x 00:24:40.551 ************************************ 00:24:40.551 START TEST keyring_file 00:24:40.551 ************************************ 00:24:40.551 04:20:33 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:40.551 * Looking for test storage... 00:24:40.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:40.551 04:20:33 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:40.551 04:20:33 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.551 04:20:33 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.551 04:20:33 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.551 04:20:33 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.551 04:20:33 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.551 04:20:33 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.551 04:20:33 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:40.551 04:20:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:40.551 04:20:33 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:40.551 04:20:33 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:40.551 04:20:33 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:40.551 04:20:33 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:40.551 04:20:33 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:40.551 04:20:33 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VfyxjSz8Lo 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VfyxjSz8Lo 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VfyxjSz8Lo 00:24:40.551 04:20:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.VfyxjSz8Lo 00:24:40.551 04:20:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.P72vfmHMfu 00:24:40.551 04:20:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:40.551 04:20:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:40.810 04:20:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.P72vfmHMfu 00:24:40.810 04:20:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.P72vfmHMfu 00:24:40.810 04:20:33 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.P72vfmHMfu 00:24:40.810 04:20:33 keyring_file -- keyring/file.sh@30 -- # tgtpid=101220 00:24:40.810 04:20:33 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.810 04:20:33 keyring_file -- keyring/file.sh@32 -- # waitforlisten 101220 00:24:40.810 04:20:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 101220 ']' 00:24:40.810 04:20:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.810 04:20:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.810 04:20:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.810 04:20:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.810 04:20:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:40.810 [2024-07-23 04:20:33.967120] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 
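prep_key, traced above for key0 and key1, wraps a raw hex key in the NVMe TLS PSK interchange format and stores it in a mode-0600 temp file. The encoding details below (base64 of the key bytes plus an appended little-endian CRC-32, and hash indicator 00 for "no PSK digest") are my reading of format_interchange_psk rather than a verified reference, so treat this as a sketch:

  # Sketch of what prep_key does for key0; verify the CRC byte order and hash field
  # against nvmf/common.sh before relying on it.
  key_hex=00112233445566778899aabbccddeeff
  path=$(mktemp)
  python3 - "$key_hex" <<'EOF' > "$path"
  import base64, sys, zlib
  key = bytes.fromhex(sys.argv[1])
  crc = zlib.crc32(key).to_bytes(4, "little")   # assumed byte order
  print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")
  EOF
  chmod 0600 "$path"
  echo "$path"

The resulting /tmp/tmp.* paths are what get registered with keyring_file_add_key further down.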
00:24:40.810 [2024-07-23 04:20:33.967213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101220 ] 00:24:40.810 [2024-07-23 04:20:34.088682] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:40.810 [2024-07-23 04:20:34.106142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.068 [2024-07-23 04:20:34.182361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.068 [2024-07-23 04:20:34.238468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:41.635 04:20:34 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.635 04:20:34 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:41.635 04:20:34 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:41.635 04:20:34 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.635 04:20:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:41.635 [2024-07-23 04:20:34.951792] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.635 null0 00:24:41.893 [2024-07-23 04:20:34.983770] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.893 [2024-07-23 04:20:34.983982] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:41.893 [2024-07-23 04:20:34.991755] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:41.893 04:20:34 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.893 04:20:34 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:41.893 04:20:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:41.893 04:20:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:41.893 04:20:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:41.893 04:20:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.893 04:20:34 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:41.893 04:20:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.893 04:20:34 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:41.893 04:20:34 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:41.893 [2024-07-23 04:20:35.003753] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:41.893 request: 00:24:41.893 { 00:24:41.893 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.893 "secure_channel": false, 00:24:41.893 "listen_address": { 00:24:41.893 "trtype": "tcp", 00:24:41.893 "traddr": "127.0.0.1", 00:24:41.893 "trsvcid": "4420" 00:24:41.893 }, 00:24:41.893 "method": "nvmf_subsystem_add_listener", 00:24:41.893 "req_id": 1 00:24:41.893 } 00:24:41.893 Got JSON-RPC error response 00:24:41.893 response: 
00:24:41.893 { 00:24:41.893 "code": -32602, 00:24:41.893 "message": "Invalid parameters" 00:24:41.893 } 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:41.893 04:20:35 keyring_file -- keyring/file.sh@46 -- # bperfpid=101237 00:24:41.893 04:20:35 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:41.893 04:20:35 keyring_file -- keyring/file.sh@48 -- # waitforlisten 101237 /var/tmp/bperf.sock 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 101237 ']' 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.893 04:20:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:41.893 [2024-07-23 04:20:35.064026] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:24:41.893 [2024-07-23 04:20:35.064122] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101237 ] 00:24:41.893 [2024-07-23 04:20:35.185144] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
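Editor's note: the trace above shows the harness's negative-test pattern. `NOT rpc_cmd nvmf_subsystem_add_listener ...` is expected to fail because the target already listens on 127.0.0.1:4420, and the wrapper turns the non-zero exit status ("Listener already exists", es=1) into a pass. A minimal sketch of that idea follows; the real helper is `NOT()` in autotest_common.sh with xtrace and argument validation, and the `not_ok` name below is mine.

# not_ok CMD...: succeed only if CMD fails (simplified stand-in for the harness's NOT helper)
not_ok() {
    local es=0
    "$@" || es=$?        # run the command and capture its exit status
    (( es != 0 ))        # invert it: a failing CMD makes this assertion pass
}

# Example: adding the same listener twice should be rejected with "Listener already exists"
not_ok scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0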
00:24:41.893 [2024-07-23 04:20:35.206093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.152 [2024-07-23 04:20:35.270327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.152 [2024-07-23 04:20:35.326071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:42.719 04:20:35 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.719 04:20:35 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:42.719 04:20:35 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VfyxjSz8Lo 00:24:42.719 04:20:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VfyxjSz8Lo 00:24:42.978 04:20:36 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.P72vfmHMfu 00:24:42.978 04:20:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.P72vfmHMfu 00:24:43.236 04:20:36 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:43.237 04:20:36 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:43.237 04:20:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:43.237 04:20:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:43.237 04:20:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:43.495 04:20:36 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.VfyxjSz8Lo == \/\t\m\p\/\t\m\p\.\V\f\y\x\j\S\z\8\L\o ]] 00:24:43.495 04:20:36 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:43.495 04:20:36 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:43.495 04:20:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:43.495 04:20:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:43.495 04:20:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:43.495 04:20:36 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.P72vfmHMfu == \/\t\m\p\/\t\m\p\.\P\7\2\v\f\m\H\M\f\u ]] 00:24:43.495 04:20:36 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:43.495 04:20:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:43.495 04:20:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:43.495 04:20:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:43.496 04:20:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:43.496 04:20:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:43.754 04:20:37 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:43.754 04:20:37 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:43.754 04:20:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:43.754 04:20:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:43.754 04:20:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:43.754 04:20:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:43.754 04:20:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:44.013 
04:20:37 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:44.013 04:20:37 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:44.013 04:20:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:44.272 [2024-07-23 04:20:37.479596] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.272 nvme0n1 00:24:44.272 04:20:37 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:44.272 04:20:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:44.272 04:20:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:44.272 04:20:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:44.272 04:20:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:44.272 04:20:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:44.530 04:20:37 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:44.530 04:20:37 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:24:44.530 04:20:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:44.530 04:20:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:44.530 04:20:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:44.530 04:20:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:44.530 04:20:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:44.789 04:20:38 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:44.789 04:20:38 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:44.789 Running I/O for 1 seconds... 
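Editor's note: the refcount assertions above verify that attaching a controller with `--psk key0` takes an extra reference on the key (2 for key0, 1 for the unused key1). A condensed sketch of that sequence against the bdevperf RPC socket is below; paths are shortened relative to the repo and /tmp/psk0.key is an illustrative stand-in for the mktemp-generated file used by the test.

rpc="scripts/rpc.py -s /var/tmp/bperf.sock"

$rpc keyring_file_add_key key0 /tmp/psk0.key      # the key file itself has to be mode 0600
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# refcnt should now read 2: one reference from the keyring, one from the TLS connection
$rpc keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'

I/O is then driven through examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests, which is what produces the one-second latency table that follows.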
00:24:46.164 00:24:46.164 Latency(us) 00:24:46.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.164 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:46.164 nvme0n1 : 1.01 13876.54 54.21 0.00 0.00 9193.94 3500.22 14239.19 00:24:46.164 =================================================================================================================== 00:24:46.165 Total : 13876.54 54.21 0.00 0.00 9193.94 3500.22 14239.19 00:24:46.165 0 00:24:46.165 04:20:39 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:46.165 04:20:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:46.165 04:20:39 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:24:46.165 04:20:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:46.165 04:20:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:46.165 04:20:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:46.165 04:20:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.165 04:20:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.423 04:20:39 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:46.423 04:20:39 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:24:46.423 04:20:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:46.423 04:20:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:46.423 04:20:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.423 04:20:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.423 04:20:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:46.682 04:20:39 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:46.682 04:20:39 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:46.682 04:20:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:46.682 04:20:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:46.682 04:20:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:46.682 04:20:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.682 04:20:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:46.682 04:20:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.682 04:20:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:46.682 04:20:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:46.941 [2024-07-23 04:20:40.128374] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:46.941 [2024-07-23 04:20:40.128655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2147c80 (107): Transport endpoint is not connected 00:24:46.941 [2024-07-23 04:20:40.129647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2147c80 (9): Bad file descriptor 00:24:46.941 [2024-07-23 04:20:40.130644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:46.941 [2024-07-23 04:20:40.130662] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:46.941 [2024-07-23 04:20:40.130689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:46.941 request: 00:24:46.941 { 00:24:46.941 "name": "nvme0", 00:24:46.941 "trtype": "tcp", 00:24:46.941 "traddr": "127.0.0.1", 00:24:46.941 "adrfam": "ipv4", 00:24:46.941 "trsvcid": "4420", 00:24:46.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:46.941 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:46.941 "prchk_reftag": false, 00:24:46.941 "prchk_guard": false, 00:24:46.941 "hdgst": false, 00:24:46.941 "ddgst": false, 00:24:46.941 "psk": "key1", 00:24:46.941 "method": "bdev_nvme_attach_controller", 00:24:46.941 "req_id": 1 00:24:46.941 } 00:24:46.941 Got JSON-RPC error response 00:24:46.941 response: 00:24:46.941 { 00:24:46.941 "code": -5, 00:24:46.941 "message": "Input/output error" 00:24:46.941 } 00:24:46.941 04:20:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:46.941 04:20:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:46.941 04:20:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:46.941 04:20:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:46.941 04:20:40 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:24:46.941 04:20:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:46.941 04:20:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:46.941 04:20:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.941 04:20:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:46.942 04:20:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.200 04:20:40 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:47.200 04:20:40 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:24:47.200 04:20:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:47.200 04:20:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:47.200 04:20:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.200 04:20:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.200 04:20:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:47.482 04:20:40 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:47.482 04:20:40 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:47.482 04:20:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:47.741 04:20:40 keyring_file 
-- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:47.741 04:20:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:47.999 04:20:41 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:47.999 04:20:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.999 04:20:41 keyring_file -- keyring/file.sh@77 -- # jq length 00:24:47.999 04:20:41 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:47.999 04:20:41 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.VfyxjSz8Lo 00:24:47.999 04:20:41 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.VfyxjSz8Lo 00:24:47.999 04:20:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:47.999 04:20:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.VfyxjSz8Lo 00:24:47.999 04:20:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:47.999 04:20:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:47.999 04:20:41 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:47.999 04:20:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:47.999 04:20:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VfyxjSz8Lo 00:24:47.999 04:20:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VfyxjSz8Lo 00:24:48.256 [2024-07-23 04:20:41.515162] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VfyxjSz8Lo': 0100660 00:24:48.256 [2024-07-23 04:20:41.515213] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:48.256 request: 00:24:48.256 { 00:24:48.256 "name": "key0", 00:24:48.256 "path": "/tmp/tmp.VfyxjSz8Lo", 00:24:48.256 "method": "keyring_file_add_key", 00:24:48.256 "req_id": 1 00:24:48.256 } 00:24:48.256 Got JSON-RPC error response 00:24:48.256 response: 00:24:48.256 { 00:24:48.257 "code": -1, 00:24:48.257 "message": "Operation not permitted" 00:24:48.257 } 00:24:48.257 04:20:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:48.257 04:20:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:48.257 04:20:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:48.257 04:20:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:48.257 04:20:41 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.VfyxjSz8Lo 00:24:48.257 04:20:41 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VfyxjSz8Lo 00:24:48.257 04:20:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VfyxjSz8Lo 00:24:48.515 04:20:41 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.VfyxjSz8Lo 00:24:48.515 04:20:41 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:24:48.515 04:20:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:48.515 04:20:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:48.515 04:20:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:48.515 
04:20:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:48.515 04:20:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:48.773 04:20:41 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:48.773 04:20:41 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:48.773 04:20:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:48.773 04:20:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:48.773 04:20:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:48.773 04:20:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.773 04:20:41 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:48.773 04:20:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.773 04:20:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:48.773 04:20:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:49.031 [2024-07-23 04:20:42.127341] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.VfyxjSz8Lo': No such file or directory 00:24:49.031 [2024-07-23 04:20:42.127396] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:49.031 [2024-07-23 04:20:42.127435] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:49.031 [2024-07-23 04:20:42.127442] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:49.031 [2024-07-23 04:20:42.127450] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:49.031 request: 00:24:49.031 { 00:24:49.031 "name": "nvme0", 00:24:49.031 "trtype": "tcp", 00:24:49.031 "traddr": "127.0.0.1", 00:24:49.031 "adrfam": "ipv4", 00:24:49.031 "trsvcid": "4420", 00:24:49.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:49.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:49.031 "prchk_reftag": false, 00:24:49.031 "prchk_guard": false, 00:24:49.031 "hdgst": false, 00:24:49.031 "ddgst": false, 00:24:49.031 "psk": "key0", 00:24:49.031 "method": "bdev_nvme_attach_controller", 00:24:49.031 "req_id": 1 00:24:49.031 } 00:24:49.031 Got JSON-RPC error response 00:24:49.031 response: 00:24:49.031 { 00:24:49.031 "code": -19, 00:24:49.031 "message": "No such device" 00:24:49.031 } 00:24:49.031 04:20:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:49.031 04:20:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:49.031 04:20:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:49.031 04:20:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:49.031 04:20:42 keyring_file -- keyring/file.sh@92 -- 
# bperf_cmd keyring_file_remove_key key0 00:24:49.031 04:20:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:49.289 04:20:42 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:49.289 04:20:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:49.289 04:20:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:49.289 04:20:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:49.289 04:20:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:49.289 04:20:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:49.289 04:20:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7bCvRYNffZ 00:24:49.289 04:20:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:49.290 04:20:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:49.290 04:20:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:49.290 04:20:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:49.290 04:20:42 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:49.290 04:20:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:49.290 04:20:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:49.290 04:20:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7bCvRYNffZ 00:24:49.290 04:20:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7bCvRYNffZ 00:24:49.290 04:20:42 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.7bCvRYNffZ 00:24:49.290 04:20:42 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7bCvRYNffZ 00:24:49.290 04:20:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7bCvRYNffZ 00:24:49.548 04:20:42 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:49.548 04:20:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:49.807 nvme0n1 00:24:49.807 04:20:43 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:24:49.807 04:20:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:49.807 04:20:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:49.807 04:20:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:49.807 04:20:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:49.807 04:20:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:50.066 04:20:43 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:50.066 04:20:43 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:50.066 04:20:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:50.324 04:20:43 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:24:50.324 04:20:43 
keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:24:50.324 04:20:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:50.324 04:20:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:50.324 04:20:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.582 04:20:43 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:50.582 04:20:43 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:24:50.582 04:20:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:50.582 04:20:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:50.582 04:20:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:50.582 04:20:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.582 04:20:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:50.841 04:20:43 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:50.841 04:20:43 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:50.841 04:20:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:50.841 04:20:44 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:50.841 04:20:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.841 04:20:44 keyring_file -- keyring/file.sh@104 -- # jq length 00:24:51.099 04:20:44 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:51.099 04:20:44 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7bCvRYNffZ 00:24:51.099 04:20:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7bCvRYNffZ 00:24:51.358 04:20:44 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.P72vfmHMfu 00:24:51.358 04:20:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.P72vfmHMfu 00:24:51.615 04:20:44 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:51.615 04:20:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:51.873 nvme0n1 00:24:51.873 04:20:45 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:51.873 04:20:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:52.131 04:20:45 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:52.131 "subsystems": [ 00:24:52.131 { 00:24:52.131 "subsystem": "keyring", 00:24:52.131 "config": [ 00:24:52.131 { 00:24:52.131 "method": "keyring_file_add_key", 00:24:52.131 "params": { 00:24:52.131 "name": "key0", 00:24:52.131 "path": "/tmp/tmp.7bCvRYNffZ" 00:24:52.131 } 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "method": "keyring_file_add_key", 00:24:52.131 
"params": { 00:24:52.131 "name": "key1", 00:24:52.131 "path": "/tmp/tmp.P72vfmHMfu" 00:24:52.131 } 00:24:52.131 } 00:24:52.131 ] 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "subsystem": "iobuf", 00:24:52.131 "config": [ 00:24:52.131 { 00:24:52.131 "method": "iobuf_set_options", 00:24:52.131 "params": { 00:24:52.131 "small_pool_count": 8192, 00:24:52.131 "large_pool_count": 1024, 00:24:52.131 "small_bufsize": 8192, 00:24:52.131 "large_bufsize": 135168 00:24:52.131 } 00:24:52.131 } 00:24:52.131 ] 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "subsystem": "sock", 00:24:52.131 "config": [ 00:24:52.131 { 00:24:52.131 "method": "sock_set_default_impl", 00:24:52.131 "params": { 00:24:52.131 "impl_name": "uring" 00:24:52.131 } 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "method": "sock_impl_set_options", 00:24:52.131 "params": { 00:24:52.131 "impl_name": "ssl", 00:24:52.131 "recv_buf_size": 4096, 00:24:52.131 "send_buf_size": 4096, 00:24:52.131 "enable_recv_pipe": true, 00:24:52.131 "enable_quickack": false, 00:24:52.131 "enable_placement_id": 0, 00:24:52.131 "enable_zerocopy_send_server": true, 00:24:52.131 "enable_zerocopy_send_client": false, 00:24:52.131 "zerocopy_threshold": 0, 00:24:52.131 "tls_version": 0, 00:24:52.131 "enable_ktls": false 00:24:52.131 } 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "method": "sock_impl_set_options", 00:24:52.131 "params": { 00:24:52.131 "impl_name": "posix", 00:24:52.131 "recv_buf_size": 2097152, 00:24:52.131 "send_buf_size": 2097152, 00:24:52.131 "enable_recv_pipe": true, 00:24:52.131 "enable_quickack": false, 00:24:52.131 "enable_placement_id": 0, 00:24:52.131 "enable_zerocopy_send_server": true, 00:24:52.131 "enable_zerocopy_send_client": false, 00:24:52.131 "zerocopy_threshold": 0, 00:24:52.131 "tls_version": 0, 00:24:52.131 "enable_ktls": false 00:24:52.131 } 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "method": "sock_impl_set_options", 00:24:52.131 "params": { 00:24:52.131 "impl_name": "uring", 00:24:52.131 "recv_buf_size": 2097152, 00:24:52.131 "send_buf_size": 2097152, 00:24:52.131 "enable_recv_pipe": true, 00:24:52.131 "enable_quickack": false, 00:24:52.131 "enable_placement_id": 0, 00:24:52.131 "enable_zerocopy_send_server": false, 00:24:52.131 "enable_zerocopy_send_client": false, 00:24:52.131 "zerocopy_threshold": 0, 00:24:52.131 "tls_version": 0, 00:24:52.131 "enable_ktls": false 00:24:52.131 } 00:24:52.131 } 00:24:52.131 ] 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "subsystem": "vmd", 00:24:52.131 "config": [] 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "subsystem": "accel", 00:24:52.131 "config": [ 00:24:52.131 { 00:24:52.131 "method": "accel_set_options", 00:24:52.131 "params": { 00:24:52.131 "small_cache_size": 128, 00:24:52.131 "large_cache_size": 16, 00:24:52.131 "task_count": 2048, 00:24:52.131 "sequence_count": 2048, 00:24:52.131 "buf_count": 2048 00:24:52.131 } 00:24:52.131 } 00:24:52.131 ] 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "subsystem": "bdev", 00:24:52.131 "config": [ 00:24:52.131 { 00:24:52.131 "method": "bdev_set_options", 00:24:52.131 "params": { 00:24:52.131 "bdev_io_pool_size": 65535, 00:24:52.131 "bdev_io_cache_size": 256, 00:24:52.131 "bdev_auto_examine": true, 00:24:52.131 "iobuf_small_cache_size": 128, 00:24:52.131 "iobuf_large_cache_size": 16 00:24:52.131 } 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "method": "bdev_raid_set_options", 00:24:52.131 "params": { 00:24:52.131 "process_window_size_kb": 1024, 00:24:52.131 "process_max_bandwidth_mb_sec": 0 00:24:52.131 } 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "method": 
"bdev_iscsi_set_options", 00:24:52.131 "params": { 00:24:52.131 "timeout_sec": 30 00:24:52.131 } 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "method": "bdev_nvme_set_options", 00:24:52.131 "params": { 00:24:52.131 "action_on_timeout": "none", 00:24:52.131 "timeout_us": 0, 00:24:52.131 "timeout_admin_us": 0, 00:24:52.131 "keep_alive_timeout_ms": 10000, 00:24:52.131 "arbitration_burst": 0, 00:24:52.131 "low_priority_weight": 0, 00:24:52.131 "medium_priority_weight": 0, 00:24:52.131 "high_priority_weight": 0, 00:24:52.131 "nvme_adminq_poll_period_us": 10000, 00:24:52.131 "nvme_ioq_poll_period_us": 0, 00:24:52.131 "io_queue_requests": 512, 00:24:52.131 "delay_cmd_submit": true, 00:24:52.131 "transport_retry_count": 4, 00:24:52.131 "bdev_retry_count": 3, 00:24:52.131 "transport_ack_timeout": 0, 00:24:52.131 "ctrlr_loss_timeout_sec": 0, 00:24:52.131 "reconnect_delay_sec": 0, 00:24:52.131 "fast_io_fail_timeout_sec": 0, 00:24:52.131 "disable_auto_failback": false, 00:24:52.131 "generate_uuids": false, 00:24:52.131 "transport_tos": 0, 00:24:52.131 "nvme_error_stat": false, 00:24:52.131 "rdma_srq_size": 0, 00:24:52.131 "io_path_stat": false, 00:24:52.131 "allow_accel_sequence": false, 00:24:52.131 "rdma_max_cq_size": 0, 00:24:52.131 "rdma_cm_event_timeout_ms": 0, 00:24:52.131 "dhchap_digests": [ 00:24:52.131 "sha256", 00:24:52.131 "sha384", 00:24:52.131 "sha512" 00:24:52.131 ], 00:24:52.131 "dhchap_dhgroups": [ 00:24:52.131 "null", 00:24:52.131 "ffdhe2048", 00:24:52.131 "ffdhe3072", 00:24:52.131 "ffdhe4096", 00:24:52.131 "ffdhe6144", 00:24:52.131 "ffdhe8192" 00:24:52.131 ] 00:24:52.131 } 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "method": "bdev_nvme_attach_controller", 00:24:52.131 "params": { 00:24:52.131 "name": "nvme0", 00:24:52.131 "trtype": "TCP", 00:24:52.131 "adrfam": "IPv4", 00:24:52.131 "traddr": "127.0.0.1", 00:24:52.131 "trsvcid": "4420", 00:24:52.131 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:52.131 "prchk_reftag": false, 00:24:52.131 "prchk_guard": false, 00:24:52.131 "ctrlr_loss_timeout_sec": 0, 00:24:52.131 "reconnect_delay_sec": 0, 00:24:52.131 "fast_io_fail_timeout_sec": 0, 00:24:52.131 "psk": "key0", 00:24:52.131 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:52.131 "hdgst": false, 00:24:52.131 "ddgst": false 00:24:52.131 } 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "method": "bdev_nvme_set_hotplug", 00:24:52.131 "params": { 00:24:52.131 "period_us": 100000, 00:24:52.131 "enable": false 00:24:52.131 } 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "method": "bdev_wait_for_examine" 00:24:52.131 } 00:24:52.131 ] 00:24:52.131 }, 00:24:52.131 { 00:24:52.131 "subsystem": "nbd", 00:24:52.131 "config": [] 00:24:52.131 } 00:24:52.131 ] 00:24:52.131 }' 00:24:52.131 04:20:45 keyring_file -- keyring/file.sh@114 -- # killprocess 101237 00:24:52.131 04:20:45 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 101237 ']' 00:24:52.131 04:20:45 keyring_file -- common/autotest_common.sh@952 -- # kill -0 101237 00:24:52.131 04:20:45 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:52.131 04:20:45 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:52.131 04:20:45 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101237 00:24:52.131 killing process with pid 101237 00:24:52.131 Received shutdown signal, test time was about 1.000000 seconds 00:24:52.131 00:24:52.131 Latency(us) 00:24:52.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.131 
=================================================================================================================== 00:24:52.131 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:52.131 04:20:45 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:52.131 04:20:45 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:52.131 04:20:45 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101237' 00:24:52.131 04:20:45 keyring_file -- common/autotest_common.sh@967 -- # kill 101237 00:24:52.131 04:20:45 keyring_file -- common/autotest_common.sh@972 -- # wait 101237 00:24:52.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:52.389 04:20:45 keyring_file -- keyring/file.sh@117 -- # bperfpid=101474 00:24:52.389 04:20:45 keyring_file -- keyring/file.sh@119 -- # waitforlisten 101474 /var/tmp/bperf.sock 00:24:52.389 04:20:45 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 101474 ']' 00:24:52.389 04:20:45 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:52.389 04:20:45 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:52.389 04:20:45 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:52.389 "subsystems": [ 00:24:52.389 { 00:24:52.389 "subsystem": "keyring", 00:24:52.389 "config": [ 00:24:52.389 { 00:24:52.389 "method": "keyring_file_add_key", 00:24:52.389 "params": { 00:24:52.389 "name": "key0", 00:24:52.389 "path": "/tmp/tmp.7bCvRYNffZ" 00:24:52.389 } 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "method": "keyring_file_add_key", 00:24:52.389 "params": { 00:24:52.389 "name": "key1", 00:24:52.389 "path": "/tmp/tmp.P72vfmHMfu" 00:24:52.389 } 00:24:52.389 } 00:24:52.389 ] 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "subsystem": "iobuf", 00:24:52.389 "config": [ 00:24:52.389 { 00:24:52.389 "method": "iobuf_set_options", 00:24:52.389 "params": { 00:24:52.389 "small_pool_count": 8192, 00:24:52.389 "large_pool_count": 1024, 00:24:52.389 "small_bufsize": 8192, 00:24:52.389 "large_bufsize": 135168 00:24:52.389 } 00:24:52.389 } 00:24:52.389 ] 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "subsystem": "sock", 00:24:52.389 "config": [ 00:24:52.389 { 00:24:52.389 "method": "sock_set_default_impl", 00:24:52.389 "params": { 00:24:52.389 "impl_name": "uring" 00:24:52.389 } 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "method": "sock_impl_set_options", 00:24:52.389 "params": { 00:24:52.389 "impl_name": "ssl", 00:24:52.389 "recv_buf_size": 4096, 00:24:52.389 "send_buf_size": 4096, 00:24:52.389 "enable_recv_pipe": true, 00:24:52.389 "enable_quickack": false, 00:24:52.389 "enable_placement_id": 0, 00:24:52.389 "enable_zerocopy_send_server": true, 00:24:52.389 "enable_zerocopy_send_client": false, 00:24:52.389 "zerocopy_threshold": 0, 00:24:52.389 "tls_version": 0, 00:24:52.389 "enable_ktls": false 00:24:52.389 } 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "method": "sock_impl_set_options", 00:24:52.389 "params": { 00:24:52.389 "impl_name": "posix", 00:24:52.389 "recv_buf_size": 2097152, 00:24:52.389 "send_buf_size": 2097152, 00:24:52.389 "enable_recv_pipe": true, 00:24:52.389 "enable_quickack": false, 00:24:52.389 "enable_placement_id": 0, 00:24:52.389 "enable_zerocopy_send_server": true, 00:24:52.389 "enable_zerocopy_send_client": false, 00:24:52.389 "zerocopy_threshold": 0, 00:24:52.389 "tls_version": 0, 
00:24:52.389 "enable_ktls": false 00:24:52.389 } 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "method": "sock_impl_set_options", 00:24:52.389 "params": { 00:24:52.389 "impl_name": "uring", 00:24:52.389 "recv_buf_size": 2097152, 00:24:52.389 "send_buf_size": 2097152, 00:24:52.389 "enable_recv_pipe": true, 00:24:52.389 "enable_quickack": false, 00:24:52.389 "enable_placement_id": 0, 00:24:52.389 "enable_zerocopy_send_server": false, 00:24:52.389 "enable_zerocopy_send_client": false, 00:24:52.389 "zerocopy_threshold": 0, 00:24:52.389 "tls_version": 0, 00:24:52.389 "enable_ktls": false 00:24:52.389 } 00:24:52.389 } 00:24:52.389 ] 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "subsystem": "vmd", 00:24:52.389 "config": [] 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "subsystem": "accel", 00:24:52.389 "config": [ 00:24:52.389 { 00:24:52.389 "method": "accel_set_options", 00:24:52.389 "params": { 00:24:52.389 "small_cache_size": 128, 00:24:52.389 "large_cache_size": 16, 00:24:52.389 "task_count": 2048, 00:24:52.389 "sequence_count": 2048, 00:24:52.389 "buf_count": 2048 00:24:52.389 } 00:24:52.389 } 00:24:52.389 ] 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "subsystem": "bdev", 00:24:52.389 "config": [ 00:24:52.389 { 00:24:52.389 "method": "bdev_set_options", 00:24:52.389 "params": { 00:24:52.389 "bdev_io_pool_size": 65535, 00:24:52.389 "bdev_io_cache_size": 256, 00:24:52.389 "bdev_auto_examine": true, 00:24:52.389 "iobuf_small_cache_size": 128, 00:24:52.389 "iobuf_large_cache_size": 16 00:24:52.389 } 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "method": "bdev_raid_set_options", 00:24:52.389 "params": { 00:24:52.389 "process_window_size_kb": 1024, 00:24:52.389 "process_max_bandwidth_mb_sec": 0 00:24:52.389 } 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "method": "bdev_iscsi_set_options", 00:24:52.389 "params": { 00:24:52.389 "timeout_sec": 30 00:24:52.389 } 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "method": "bdev_nvme_set_options", 00:24:52.389 "params": { 00:24:52.389 "action_on_timeout": "none", 00:24:52.389 "timeout_us": 0, 00:24:52.389 "timeout_admin_us": 0, 00:24:52.389 "keep_alive_timeout_ms": 10000, 00:24:52.389 "arbitration_burst": 0, 00:24:52.389 "low_priority_weight": 0, 00:24:52.389 "medium_priority_weight": 0, 00:24:52.389 "high_priority_weight": 0, 00:24:52.389 "nvme_adminq_poll_period_us": 10000, 00:24:52.389 "nvme_ioq_poll_period_us": 0, 00:24:52.389 "io_queue_requests": 512, 00:24:52.389 "delay_cmd_submit": true, 00:24:52.389 "transport_retry_count": 4, 00:24:52.389 "bdev_retry_count": 3, 00:24:52.389 "transport_ack_timeout": 0, 00:24:52.389 "ctrlr_loss_timeout_sec": 0, 00:24:52.389 "reconnect_delay_sec": 0, 00:24:52.389 "fast_io_fail_timeout_sec": 0, 00:24:52.389 "disable_auto_failback": false, 00:24:52.389 "generate_uuids": false, 00:24:52.389 "transport_tos": 0, 00:24:52.389 "nvme_error_stat": false, 00:24:52.389 "rdma_srq_size": 0, 00:24:52.389 "io_path_stat": false, 00:24:52.389 "allow_accel_sequence": false, 00:24:52.389 "rdma_max_cq_size": 0, 00:24:52.389 "rdma_cm_event_timeout_ms": 0, 00:24:52.389 "dhchap_digests": [ 00:24:52.389 "sha256", 00:24:52.389 "sha384", 00:24:52.389 "sha512" 00:24:52.389 ], 00:24:52.389 "dhchap_dhgroups": [ 00:24:52.389 "null", 00:24:52.389 "ffdhe2048", 00:24:52.389 "ffdhe3072", 00:24:52.389 "ffdhe4096", 00:24:52.389 "ffdhe6144", 00:24:52.389 "ffdhe8192" 00:24:52.389 ] 00:24:52.389 } 00:24:52.389 }, 00:24:52.389 { 00:24:52.389 "method": "bdev_nvme_attach_controller", 00:24:52.389 "params": { 00:24:52.390 "name": "nvme0", 00:24:52.390 "trtype": "TCP", 
00:24:52.390 "adrfam": "IPv4", 00:24:52.390 "traddr": "127.0.0.1", 00:24:52.390 "trsvcid": "4420", 00:24:52.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:52.390 "prchk_reftag": false, 00:24:52.390 "prchk_guard": false, 00:24:52.390 "ctrlr_loss_timeout_sec": 0, 00:24:52.390 "reconnect_delay_sec": 0, 00:24:52.390 "fast_io_fail_timeout_sec": 0, 00:24:52.390 "psk": "key0", 00:24:52.390 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:52.390 "hdgst": false, 00:24:52.390 "ddgst": false 00:24:52.390 } 00:24:52.390 }, 00:24:52.390 { 00:24:52.390 "method": "bdev_nvme_set_hotplug", 00:24:52.390 "params": { 00:24:52.390 "period_us": 100000, 00:24:52.390 "enable": false 00:24:52.390 } 00:24:52.390 }, 00:24:52.390 { 00:24:52.390 "method": "bdev_wait_for_examine" 00:24:52.390 } 00:24:52.390 ] 00:24:52.390 }, 00:24:52.390 { 00:24:52.390 "subsystem": "nbd", 00:24:52.390 "config": [] 00:24:52.390 } 00:24:52.390 ] 00:24:52.390 }' 00:24:52.390 04:20:45 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.390 04:20:45 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:52.390 04:20:45 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.390 04:20:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:52.390 [2024-07-23 04:20:45.680787] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:24:52.390 [2024-07-23 04:20:45.680856] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101474 ] 00:24:52.646 [2024-07-23 04:20:45.796301] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:52.646 [2024-07-23 04:20:45.815493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.646 [2024-07-23 04:20:45.868878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.904 [2024-07-23 04:20:46.000644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:52.904 [2024-07-23 04:20:46.051892] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:53.469 04:20:46 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.469 04:20:46 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:53.469 04:20:46 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:53.469 04:20:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:53.469 04:20:46 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:53.727 04:20:46 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:53.727 04:20:46 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:53.727 04:20:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:53.727 04:20:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:53.727 04:20:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:53.727 04:20:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:53.727 04:20:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:53.727 04:20:47 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:53.727 04:20:47 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:53.727 04:20:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:53.727 04:20:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:53.727 04:20:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:53.727 04:20:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:53.727 04:20:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:53.985 04:20:47 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:53.985 04:20:47 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:53.985 04:20:47 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:53.985 04:20:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:54.244 04:20:47 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:54.244 04:20:47 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:54.244 04:20:47 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.7bCvRYNffZ /tmp/tmp.P72vfmHMfu 00:24:54.244 04:20:47 keyring_file -- keyring/file.sh@20 -- # killprocess 101474 00:24:54.244 04:20:47 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 101474 ']' 00:24:54.244 04:20:47 keyring_file -- common/autotest_common.sh@952 -- # kill -0 101474 00:24:54.244 04:20:47 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:54.244 04:20:47 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.244 04:20:47 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101474 00:24:54.244 killing process with pid 101474 00:24:54.244 Received shutdown signal, 
test time was about 1.000000 seconds 00:24:54.244 00:24:54.244 Latency(us) 00:24:54.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.244 =================================================================================================================== 00:24:54.244 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:54.244 04:20:47 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:54.244 04:20:47 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:54.244 04:20:47 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101474' 00:24:54.244 04:20:47 keyring_file -- common/autotest_common.sh@967 -- # kill 101474 00:24:54.244 04:20:47 keyring_file -- common/autotest_common.sh@972 -- # wait 101474 00:24:54.502 04:20:47 keyring_file -- keyring/file.sh@21 -- # killprocess 101220 00:24:54.502 04:20:47 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 101220 ']' 00:24:54.502 04:20:47 keyring_file -- common/autotest_common.sh@952 -- # kill -0 101220 00:24:54.502 04:20:47 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:54.502 04:20:47 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.502 04:20:47 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101220 00:24:54.502 killing process with pid 101220 00:24:54.502 04:20:47 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:54.502 04:20:47 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:54.502 04:20:47 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101220' 00:24:54.502 04:20:47 keyring_file -- common/autotest_common.sh@967 -- # kill 101220 00:24:54.502 [2024-07-23 04:20:47.716425] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:54.502 04:20:47 keyring_file -- common/autotest_common.sh@972 -- # wait 101220 00:24:54.761 00:24:54.761 real 0m14.359s 00:24:54.761 user 0m35.341s 00:24:54.761 sys 0m2.879s 00:24:54.761 04:20:48 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:54.761 04:20:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:54.761 ************************************ 00:24:54.761 END TEST keyring_file 00:24:54.761 ************************************ 00:24:54.761 04:20:48 -- common/autotest_common.sh@1142 -- # return 0 00:24:54.761 04:20:48 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:24:54.761 04:20:48 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:55.020 04:20:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:55.020 04:20:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.020 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:24:55.020 ************************************ 00:24:55.020 START TEST keyring_linux 00:24:55.020 ************************************ 00:24:55.020 04:20:48 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:55.020 * Looking for test storage... 
00:24:55.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:55.020 04:20:48 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a3be551d-6f1c-46bc-8453-c5dfcf4c5274 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.020 04:20:48 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.020 04:20:48 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.020 04:20:48 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.020 04:20:48 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.020 04:20:48 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.020 04:20:48 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.020 04:20:48 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:55.020 04:20:48 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:55.020 04:20:48 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:55.020 04:20:48 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:55.020 04:20:48 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:55.020 04:20:48 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:55.020 04:20:48 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:55.020 04:20:48 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:55.020 04:20:48 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:55.020 /tmp/:spdk-test:key0 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:55.020 04:20:48 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:55.020 04:20:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:55.020 04:20:48 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:55.021 04:20:48 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:55.021 04:20:48 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:55.021 04:20:48 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:55.021 04:20:48 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:55.021 04:20:48 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:55.021 04:20:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:55.021 /tmp/:spdk-test:key1 00:24:55.021 04:20:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:55.021 04:20:48 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=101588 00:24:55.021 04:20:48 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:55.021 04:20:48 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 101588 00:24:55.021 04:20:48 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 101588 ']' 00:24:55.021 04:20:48 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.021 04:20:48 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.021 04:20:48 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.021 04:20:48 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.021 04:20:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:55.279 [2024-07-23 04:20:48.389836] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:24:55.279 [2024-07-23 04:20:48.389944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101588 ] 00:24:55.279 [2024-07-23 04:20:48.511254] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
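For readers following the xtrace above: the two prep_key calls reduce to the sketch below. The hex keys and /tmp paths are copied from the trace; the redirection of format_interchange_psk's output into the key file is inferred (bash xtrace does not print redirections), and the python one-liner that builds the NVMeTLSkey-1 payload is not reproduced here. The sketch assumes test/keyring/common.sh and test/nvmf/common.sh are sourced, as the start of the test shows.

    # minimal sketch of prep_key for key0 (key1 is identical with its own hex string)
    key0=00112233445566778899aabbccddeeff
    path0=/tmp/:spdk-test:key0
    format_interchange_psk "$key0" 0 > "$path0"   # wraps the hex key as "NVMeTLSkey-1:00:<encoded>:"
    chmod 0600 "$path0"                           # PSK material must not be world-readable
    echo "$path0"                                 # common.sh hands the path back to linux.sh

The interchange strings written here are the same material that is registered a moment later in the kernel session keyring under the names :spdk-test:key0 and :spdk-test:key1.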
00:24:55.279 [2024-07-23 04:20:48.525487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.279 [2024-07-23 04:20:48.581184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.555 [2024-07-23 04:20:48.631808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:56.136 04:20:49 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.136 04:20:49 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:56.136 04:20:49 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:56.136 04:20:49 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.136 04:20:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:56.136 [2024-07-23 04:20:49.361287] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.136 null0 00:24:56.136 [2024-07-23 04:20:49.393264] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:56.136 [2024-07-23 04:20:49.393469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:56.136 04:20:49 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.136 04:20:49 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:56.137 776888835 00:24:56.137 04:20:49 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:56.137 1026303760 00:24:56.137 04:20:49 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=101605 00:24:56.137 04:20:49 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 101605 /var/tmp/bperf.sock 00:24:56.137 04:20:49 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:56.137 04:20:49 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 101605 ']' 00:24:56.137 04:20:49 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:56.137 04:20:49 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.137 04:20:49 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:56.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:56.137 04:20:49 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.137 04:20:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:56.137 [2024-07-23 04:20:49.475884] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.07.0-rc2 initialization... 00:24:56.137 [2024-07-23 04:20:49.476342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101605 ] 00:24:56.395 [2024-07-23 04:20:49.600536] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
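The keyring registration and bdevperf launch traced above, together with the RPCs that follow, amount to the shell below. Paths are shortened to be repo-relative; reading the interchange string back from the /tmp/:spdk-test:key* files is an assumption (the trace passes the literal string inline), and the serial numbers keyctl prints (776888835 and 1026303760 in this run) vary from run to run.

    # register both PSKs in the session keyring (@s); keyctl prints each new key's serial
    keyctl add user :spdk-test:key0 "$(< /tmp/:spdk-test:key0)" @s
    keyctl add user :spdk-test:key1 "$(< /tmp/:spdk-test:key1)" @s
    # start bdevperf idle (-z --wait-for-rpc) on its own RPC socket and wait for it
    build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z --wait-for-rpc &
    waitforlisten $! /var/tmp/bperf.sock
    # enable the Linux keyring plugin before framework init, then attach over TLS
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

Starting bdevperf with -z --wait-for-rpc is what makes this ordering possible: keyring_linux_set_options is sent while the app is still waiting, which is presumably why the trace enables the plugin first and only then calls framework_start_init and attaches nvme0.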
00:24:56.395 [2024-07-23 04:20:49.610472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.395 [2024-07-23 04:20:49.667907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.329 04:20:50 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.329 04:20:50 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:57.329 04:20:50 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:57.329 04:20:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:57.329 04:20:50 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:57.329 04:20:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:57.587 [2024-07-23 04:20:50.791234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:57.587 04:20:50 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:57.587 04:20:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:57.845 [2024-07-23 04:20:51.019804] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:57.845 nvme0n1 00:24:57.845 04:20:51 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:57.845 04:20:51 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:57.845 04:20:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:57.845 04:20:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:57.845 04:20:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:57.845 04:20:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:58.102 04:20:51 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:58.102 04:20:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:58.102 04:20:51 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:58.102 04:20:51 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:58.102 04:20:51 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:58.102 04:20:51 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.102 04:20:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.360 04:20:51 keyring_linux -- keyring/linux.sh@25 -- # sn=776888835 00:24:58.360 04:20:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:58.360 04:20:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:58.360 04:20:51 keyring_linux -- keyring/linux.sh@26 -- # [[ 776888835 == \7\7\6\8\8\8\8\3\5 ]] 00:24:58.360 04:20:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 776888835 00:24:58.360 04:20:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 
== \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:58.360 04:20:51 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:58.360 Running I/O for 1 seconds... 00:24:59.734 00:24:59.734 Latency(us) 00:24:59.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.734 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:59.734 nvme0n1 : 1.01 15549.72 60.74 0.00 0.00 8189.86 6851.49 17754.30 00:24:59.734 =================================================================================================================== 00:24:59.734 Total : 15549.72 60.74 0.00 0.00 8189.86 6851.49 17754.30 00:24:59.734 0 00:24:59.734 04:20:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:59.734 04:20:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:59.734 04:20:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:59.734 04:20:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:59.734 04:20:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:59.734 04:20:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:59.734 04:20:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.734 04:20:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:59.992 04:20:53 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:59.992 04:20:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:59.992 04:20:53 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:59.992 04:20:53 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:59.992 04:20:53 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:24:59.992 04:20:53 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:59.992 04:20:53 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:59.992 04:20:53 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.992 04:20:53 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:59.992 04:20:53 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.992 04:20:53 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:59.992 04:20:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:00.249 [2024-07-23 04:20:53.446168] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:25:00.249 [2024-07-23 04:20:53.447105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69b8d0 (107): Transport endpoint is not connected 00:25:00.249 [2024-07-23 04:20:53.448080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69b8d0 (9): Bad file descriptor 00:25:00.249 [2024-07-23 04:20:53.449077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.249 [2024-07-23 04:20:53.449096] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:00.249 [2024-07-23 04:20:53.449106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.249 request: 00:25:00.249 { 00:25:00.249 "name": "nvme0", 00:25:00.249 "trtype": "tcp", 00:25:00.249 "traddr": "127.0.0.1", 00:25:00.249 "adrfam": "ipv4", 00:25:00.249 "trsvcid": "4420", 00:25:00.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:00.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:00.249 "prchk_reftag": false, 00:25:00.249 "prchk_guard": false, 00:25:00.249 "hdgst": false, 00:25:00.249 "ddgst": false, 00:25:00.249 "psk": ":spdk-test:key1", 00:25:00.249 "method": "bdev_nvme_attach_controller", 00:25:00.249 "req_id": 1 00:25:00.249 } 00:25:00.249 Got JSON-RPC error response 00:25:00.249 response: 00:25:00.249 { 00:25:00.249 "code": -5, 00:25:00.249 "message": "Input/output error" 00:25:00.249 } 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@33 -- # sn=776888835 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 776888835 00:25:00.249 1 links removed 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@33 -- # sn=1026303760 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1026303760 00:25:00.249 1 links removed 00:25:00.249 04:20:53 keyring_linux -- keyring/linux.sh@41 -- # killprocess 101605 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 101605 ']' 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 101605 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:25:00.249 
04:20:53 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101605 00:25:00.249 killing process with pid 101605 00:25:00.249 Received shutdown signal, test time was about 1.000000 seconds 00:25:00.249 00:25:00.249 Latency(us) 00:25:00.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.249 =================================================================================================================== 00:25:00.249 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101605' 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@967 -- # kill 101605 00:25:00.249 04:20:53 keyring_linux -- common/autotest_common.sh@972 -- # wait 101605 00:25:00.506 04:20:53 keyring_linux -- keyring/linux.sh@42 -- # killprocess 101588 00:25:00.506 04:20:53 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 101588 ']' 00:25:00.506 04:20:53 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 101588 00:25:00.506 04:20:53 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:25:00.506 04:20:53 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.506 04:20:53 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101588 00:25:00.506 killing process with pid 101588 00:25:00.506 04:20:53 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:00.506 04:20:53 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:00.506 04:20:53 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101588' 00:25:00.506 04:20:53 keyring_linux -- common/autotest_common.sh@967 -- # kill 101588 00:25:00.506 04:20:53 keyring_linux -- common/autotest_common.sh@972 -- # wait 101588 00:25:00.764 00:25:00.764 real 0m5.922s 00:25:00.764 user 0m11.275s 00:25:00.764 sys 0m1.507s 00:25:00.764 04:20:54 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.764 ************************************ 00:25:00.764 END TEST keyring_linux 00:25:00.764 ************************************ 00:25:00.764 04:20:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:00.764 04:20:54 -- common/autotest_common.sh@1142 -- # return 0 00:25:00.764 04:20:54 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:25:00.764 04:20:54 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:00.764 04:20:54 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:00.764 04:20:54 -- spdk/autotest.sh@371 
-- # [[ 0 -eq 1 ]] 00:25:00.764 04:20:54 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:25:00.764 04:20:54 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:25:00.764 04:20:54 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:25:00.764 04:20:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.764 04:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:00.764 04:20:54 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:25:00.764 04:20:54 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:00.764 04:20:54 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:00.764 04:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:02.663 INFO: APP EXITING 00:25:02.663 INFO: killing all VMs 00:25:02.663 INFO: killing vhost app 00:25:02.663 INFO: EXIT DONE 00:25:02.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:03.178 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:03.178 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:03.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:03.744 Cleaning 00:25:03.744 Removing: /var/run/dpdk/spdk0/config 00:25:03.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:03.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:03.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:03.744 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:03.744 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:03.744 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:03.744 Removing: /var/run/dpdk/spdk1/config 00:25:03.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:03.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:03.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:03.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:03.744 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:03.744 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:03.744 Removing: /var/run/dpdk/spdk2/config 00:25:03.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:03.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:03.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:03.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:03.744 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:03.744 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:03.744 Removing: /var/run/dpdk/spdk3/config 00:25:03.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:03.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:03.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:03.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:03.744 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:03.744 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:03.744 Removing: /var/run/dpdk/spdk4/config 00:25:03.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:03.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:03.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:03.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:03.744 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:03.744 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:04.002 Removing: /dev/shm/nvmf_trace.0 00:25:04.002 Removing: /dev/shm/spdk_tgt_trace.pid72411 00:25:04.002 Removing: /var/run/dpdk/spdk0 
00:25:04.002 Removing: /var/run/dpdk/spdk1 00:25:04.002 Removing: /var/run/dpdk/spdk2 00:25:04.002 Removing: /var/run/dpdk/spdk3 00:25:04.002 Removing: /var/run/dpdk/spdk4 00:25:04.002 Removing: /var/run/dpdk/spdk_pid100410 00:25:04.002 Removing: /var/run/dpdk/spdk_pid100440 00:25:04.002 Removing: /var/run/dpdk/spdk_pid100481 00:25:04.002 Removing: /var/run/dpdk/spdk_pid100730 00:25:04.002 Removing: /var/run/dpdk/spdk_pid100760 00:25:04.002 Removing: /var/run/dpdk/spdk_pid100795 00:25:04.002 Removing: /var/run/dpdk/spdk_pid101220 00:25:04.002 Removing: /var/run/dpdk/spdk_pid101237 00:25:04.002 Removing: /var/run/dpdk/spdk_pid101474 00:25:04.002 Removing: /var/run/dpdk/spdk_pid101588 00:25:04.002 Removing: /var/run/dpdk/spdk_pid101605 00:25:04.002 Removing: /var/run/dpdk/spdk_pid72266 00:25:04.002 Removing: /var/run/dpdk/spdk_pid72411 00:25:04.002 Removing: /var/run/dpdk/spdk_pid72608 00:25:04.002 Removing: /var/run/dpdk/spdk_pid72696 00:25:04.002 Removing: /var/run/dpdk/spdk_pid72718 00:25:04.002 Removing: /var/run/dpdk/spdk_pid72833 00:25:04.002 Removing: /var/run/dpdk/spdk_pid72851 00:25:04.002 Removing: /var/run/dpdk/spdk_pid72969 00:25:04.002 Removing: /var/run/dpdk/spdk_pid73159 00:25:04.002 Removing: /var/run/dpdk/spdk_pid73294 00:25:04.002 Removing: /var/run/dpdk/spdk_pid73364 00:25:04.002 Removing: /var/run/dpdk/spdk_pid73427 00:25:04.002 Removing: /var/run/dpdk/spdk_pid73518 00:25:04.002 Removing: /var/run/dpdk/spdk_pid73595 00:25:04.002 Removing: /var/run/dpdk/spdk_pid73628 00:25:04.002 Removing: /var/run/dpdk/spdk_pid73658 00:25:04.002 Removing: /var/run/dpdk/spdk_pid73724 00:25:04.002 Removing: /var/run/dpdk/spdk_pid73819 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74236 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74282 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74333 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74349 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74417 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74433 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74502 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74518 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74558 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74576 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74627 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74641 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74762 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74792 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74872 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74918 00:25:04.002 Removing: /var/run/dpdk/spdk_pid74948 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75007 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75041 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75070 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75109 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75139 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75178 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75208 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75243 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75277 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75312 00:25:04.002 Removing: /var/run/dpdk/spdk_pid75346 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75383 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75417 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75452 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75481 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75521 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75550 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75593 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75625 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75660 
00:25:04.003 Removing: /var/run/dpdk/spdk_pid75695 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75765 00:25:04.003 Removing: /var/run/dpdk/spdk_pid75853 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76157 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76175 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76211 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76225 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76240 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76265 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76278 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76299 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76318 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76332 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76353 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76372 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76385 00:25:04.003 Removing: /var/run/dpdk/spdk_pid76401 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76425 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76439 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76460 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76479 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76491 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76508 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76544 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76552 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76587 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76651 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76674 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76689 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76712 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76727 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76729 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76777 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76791 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76819 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76834 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76838 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76853 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76857 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76872 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76876 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76891 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76920 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76946 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76961 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76990 00:25:04.265 Removing: /var/run/dpdk/spdk_pid76999 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77007 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77047 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77059 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77085 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77093 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77100 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77108 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77115 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77123 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77130 00:25:04.265 Removing: /var/run/dpdk/spdk_pid77138 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77212 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77254 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77358 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77391 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77437 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77457 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77479 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77488 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77525 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77546 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77616 00:25:04.266 Removing: 
/var/run/dpdk/spdk_pid77632 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77676 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77751 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77807 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77838 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77931 00:25:04.266 Removing: /var/run/dpdk/spdk_pid77973 00:25:04.266 Removing: /var/run/dpdk/spdk_pid78006 00:25:04.266 Removing: /var/run/dpdk/spdk_pid78230 00:25:04.266 Removing: /var/run/dpdk/spdk_pid78322 00:25:04.266 Removing: /var/run/dpdk/spdk_pid78345 00:25:04.266 Removing: /var/run/dpdk/spdk_pid78690 00:25:04.266 Removing: /var/run/dpdk/spdk_pid78728 00:25:04.266 Removing: /var/run/dpdk/spdk_pid79009 00:25:04.266 Removing: /var/run/dpdk/spdk_pid79430 00:25:04.266 Removing: /var/run/dpdk/spdk_pid79693 00:25:04.266 Removing: /var/run/dpdk/spdk_pid80473 00:25:04.266 Removing: /var/run/dpdk/spdk_pid81264 00:25:04.266 Removing: /var/run/dpdk/spdk_pid81382 00:25:04.266 Removing: /var/run/dpdk/spdk_pid81448 00:25:04.266 Removing: /var/run/dpdk/spdk_pid82710 00:25:04.266 Removing: /var/run/dpdk/spdk_pid82954 00:25:04.266 Removing: /var/run/dpdk/spdk_pid86137 00:25:04.266 Removing: /var/run/dpdk/spdk_pid86428 00:25:04.266 Removing: /var/run/dpdk/spdk_pid86538 00:25:04.266 Removing: /var/run/dpdk/spdk_pid86676 00:25:04.266 Removing: /var/run/dpdk/spdk_pid86692 00:25:04.266 Removing: /var/run/dpdk/spdk_pid86715 00:25:04.266 Removing: /var/run/dpdk/spdk_pid86741 00:25:04.266 Removing: /var/run/dpdk/spdk_pid86813 00:25:04.266 Removing: /var/run/dpdk/spdk_pid86942 00:25:04.266 Removing: /var/run/dpdk/spdk_pid87084 00:25:04.266 Removing: /var/run/dpdk/spdk_pid87158 00:25:04.266 Removing: /var/run/dpdk/spdk_pid87334 00:25:04.266 Removing: /var/run/dpdk/spdk_pid87418 00:25:04.266 Removing: /var/run/dpdk/spdk_pid87511 00:25:04.266 Removing: /var/run/dpdk/spdk_pid87811 00:25:04.266 Removing: /var/run/dpdk/spdk_pid88153 00:25:04.535 Removing: /var/run/dpdk/spdk_pid88161 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90371 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90374 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90639 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90653 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90673 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90702 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90708 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90787 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90795 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90903 00:25:04.535 Removing: /var/run/dpdk/spdk_pid90905 00:25:04.535 Removing: /var/run/dpdk/spdk_pid91013 00:25:04.535 Removing: /var/run/dpdk/spdk_pid91015 00:25:04.535 Removing: /var/run/dpdk/spdk_pid91411 00:25:04.535 Removing: /var/run/dpdk/spdk_pid91454 00:25:04.535 Removing: /var/run/dpdk/spdk_pid91563 00:25:04.535 Removing: /var/run/dpdk/spdk_pid91641 00:25:04.535 Removing: /var/run/dpdk/spdk_pid91934 00:25:04.535 Removing: /var/run/dpdk/spdk_pid92130 00:25:04.535 Removing: /var/run/dpdk/spdk_pid92488 00:25:04.535 Removing: /var/run/dpdk/spdk_pid92988 00:25:04.535 Removing: /var/run/dpdk/spdk_pid93778 00:25:04.535 Removing: /var/run/dpdk/spdk_pid94356 00:25:04.535 Removing: /var/run/dpdk/spdk_pid94358 00:25:04.535 Removing: /var/run/dpdk/spdk_pid96222 00:25:04.535 Removing: /var/run/dpdk/spdk_pid96285 00:25:04.535 Removing: /var/run/dpdk/spdk_pid96344 00:25:04.535 Removing: /var/run/dpdk/spdk_pid96393 00:25:04.535 Removing: /var/run/dpdk/spdk_pid96508 00:25:04.535 Removing: /var/run/dpdk/spdk_pid96555 00:25:04.535 Removing: /var/run/dpdk/spdk_pid96614 
00:25:04.535 Removing: /var/run/dpdk/spdk_pid96674 00:25:04.535 Removing: /var/run/dpdk/spdk_pid96984 00:25:04.535 Removing: /var/run/dpdk/spdk_pid98142 00:25:04.535 Removing: /var/run/dpdk/spdk_pid98283 00:25:04.535 Removing: /var/run/dpdk/spdk_pid98513 00:25:04.535 Removing: /var/run/dpdk/spdk_pid99068 00:25:04.535 Removing: /var/run/dpdk/spdk_pid99221 00:25:04.535 Removing: /var/run/dpdk/spdk_pid99378 00:25:04.535 Removing: /var/run/dpdk/spdk_pid99475 00:25:04.535 Removing: /var/run/dpdk/spdk_pid99647 00:25:04.535 Removing: /var/run/dpdk/spdk_pid99756 00:25:04.535 Clean 00:25:04.535 04:20:57 -- common/autotest_common.sh@1451 -- # return 0 00:25:04.535 04:20:57 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:25:04.535 04:20:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.535 04:20:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.535 04:20:57 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:25:04.535 04:20:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.535 04:20:57 -- common/autotest_common.sh@10 -- # set +x 00:25:04.535 04:20:57 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:04.535 04:20:57 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:04.535 04:20:57 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:04.792 04:20:57 -- spdk/autotest.sh@391 -- # hash lcov 00:25:04.792 04:20:57 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:04.792 04:20:57 -- spdk/autotest.sh@393 -- # hostname 00:25:04.792 04:20:57 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:04.792 geninfo: WARNING: invalid characters removed from testname! 
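The coverage post-processing that follows is stock lcov: capture counters from the build tree, merge them with the pre-test baseline, then strip third-party and system sources from the combined report. A condensed sketch using the same flags as the trace (file names shortened to their basenames; the full --rc option set and --no-external are as shown in the surrounding commands):

    RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    lcov $RC --no-external -q -c -d ./spdk -t "$(hostname)" -o cov_test.info       # capture
    lcov $RC --no-external -q -a cov_base.info -a cov_test.info -o cov_total.info  # merge with baseline
    lcov $RC --no-external -q -r cov_total.info '*/dpdk/*' -o cov_total.info       # drop DPDK sources
    lcov $RC --no-external -q -r cov_total.info '/usr/*'   -o cov_total.info       # drop system headers

The later -r passes visible below (examples/vmd, app/spdk_lspci, app/spdk_top) follow the same pattern, so the final cov_total.info covers only the SPDK sources exercised by this run.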
00:25:26.719 04:21:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:28.618 04:21:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:30.517 04:21:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:33.069 04:21:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:35.597 04:21:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:37.496 04:21:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:40.026 04:21:32 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:40.026 04:21:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.026 04:21:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:40.026 04:21:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.026 04:21:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.026 04:21:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.026 04:21:33 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.026 04:21:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.026 04:21:33 -- paths/export.sh@5 -- $ export PATH 00:25:40.026 04:21:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.026 04:21:33 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:40.026 04:21:33 -- common/autobuild_common.sh@447 -- $ date +%s 00:25:40.026 04:21:33 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721708493.XXXXXX 00:25:40.026 04:21:33 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721708493.hTr6rC 00:25:40.026 04:21:33 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:25:40.026 04:21:33 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:25:40.026 04:21:33 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:25:40.026 04:21:33 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:25:40.026 04:21:33 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:40.026 04:21:33 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:40.026 04:21:33 -- common/autobuild_common.sh@463 -- $ get_config_params 00:25:40.026 04:21:33 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:25:40.026 04:21:33 -- common/autotest_common.sh@10 -- $ set +x 00:25:40.026 04:21:33 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:25:40.026 04:21:33 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:25:40.026 04:21:33 -- pm/common@17 -- $ local monitor 00:25:40.026 04:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:40.026 04:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:40.026 04:21:33 -- pm/common@25 -- $ sleep 1 00:25:40.026 04:21:33 -- pm/common@21 -- $ date +%s 00:25:40.026 04:21:33 -- pm/common@21 -- $ date +%s 00:25:40.026 04:21:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721708493 00:25:40.026 04:21:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721708493 00:25:40.026 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721708493_collect-vmstat.pm.log 00:25:40.026 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721708493_collect-cpu-load.pm.log 00:25:40.975 04:21:34 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:25:40.975 04:21:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:40.975 04:21:34 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:40.975 04:21:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:40.975 04:21:34 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:25:40.975 04:21:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:40.975 04:21:34 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:40.975 04:21:34 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:40.975 04:21:34 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:40.975 04:21:34 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:40.975 04:21:34 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:40.975 04:21:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:40.975 04:21:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:40.975 04:21:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:40.975 04:21:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:40.975 04:21:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:40.975 04:21:34 -- pm/common@44 -- $ pid=103370 00:25:40.975 04:21:34 -- pm/common@50 -- $ kill -TERM 103370 00:25:40.975 04:21:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:40.975 04:21:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:40.975 04:21:34 -- pm/common@44 -- $ pid=103371 00:25:40.975 04:21:34 -- pm/common@50 -- $ kill -TERM 103371 00:25:40.975 + [[ -n 6005 ]] 00:25:40.975 + sudo kill 6005 00:25:40.984 [Pipeline] } 00:25:41.005 [Pipeline] // timeout 00:25:41.013 [Pipeline] } 00:25:41.032 [Pipeline] // stage 00:25:41.039 [Pipeline] } 00:25:41.056 [Pipeline] // catchError 00:25:41.065 [Pipeline] stage 00:25:41.067 [Pipeline] { (Stop VM) 00:25:41.082 [Pipeline] sh 00:25:41.363 + vagrant halt 00:25:44.643 ==> default: Halting domain... 00:25:51.236 [Pipeline] sh 00:25:51.515 + vagrant destroy -f 00:25:54.796 ==> default: Removing domain... 
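The teardown that closes the run is the usual Vagrant sequence followed by handing the results directory back to the Jenkins workspace and archiving it; roughly the following, as a sketch of what the pipeline stages above and below execute rather than the literal Groovy:

    vagrant halt                 # "Halting domain..."
    vagrant destroy -f           # "Removing domain..."
    mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh   # prints "Artifacts sizes are good"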
00:25:54.808 [Pipeline] sh 00:25:55.086 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:25:55.095 [Pipeline] } 00:25:55.112 [Pipeline] // stage 00:25:55.118 [Pipeline] } 00:25:55.134 [Pipeline] // dir 00:25:55.140 [Pipeline] } 00:25:55.157 [Pipeline] // wrap 00:25:55.164 [Pipeline] } 00:25:55.179 [Pipeline] // catchError 00:25:55.189 [Pipeline] stage 00:25:55.192 [Pipeline] { (Epilogue) 00:25:55.208 [Pipeline] sh 00:25:55.489 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:00.796 [Pipeline] catchError 00:26:00.798 [Pipeline] { 00:26:00.812 [Pipeline] sh 00:26:01.091 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:01.091 Artifacts sizes are good 00:26:01.100 [Pipeline] } 00:26:01.116 [Pipeline] // catchError 00:26:01.126 [Pipeline] archiveArtifacts 00:26:01.133 Archiving artifacts 00:26:01.298 [Pipeline] cleanWs 00:26:01.309 [WS-CLEANUP] Deleting project workspace... 00:26:01.309 [WS-CLEANUP] Deferred wipeout is used... 00:26:01.316 [WS-CLEANUP] done 00:26:01.318 [Pipeline] } 00:26:01.336 [Pipeline] // stage 00:26:01.341 [Pipeline] } 00:26:01.357 [Pipeline] // node 00:26:01.363 [Pipeline] End of Pipeline 00:26:01.402 Finished: SUCCESS